This is a table of trigrams (three-word sequences) and their frequencies. Search and browse the list to learn more about your study carrel.
trigram | frequency |
---|---|
the number of | 108 |
in order to | 77 |
in information retrieval | 63 |
in this paper | 61 |
we use the | 59 |
as well as | 56 |
based on the | 54 |
a set of | 50 |
learning to rank | 47 |
in terms of | 46 |
with respect to | 46 |
information retrieval doi | 43 |
advances in information | 43 |
the set of | 43 |
the performance of | 42 |
in this work | 38 |
one of the | 36 |
on the other | 34 |
in this section | 33 |
overview of the | 29 |
the effectiveness of | 29 |
a number of | 29 |
the other hand | 28 |
for information retrieval | 28 |
the results of | 28 |
the importance of | 27 |
we propose a | 26 |
to predict the | 25 |
due to the | 25 |
shown in table | 25 |
in addition to | 24 |
used in the | 23 |
to the query | 23 |
the quality of | 23 |
can be used | 23 |
of a document | 22 |
is used to | 22 |
according to the | 21 |
part of the | 21 |
each of the | 20 |
we do not | 20 |
be used to | 19 |
convolutional neural networks | 19 |
a sequence of | 19 |
of the query | 19 |
respect to the | 19 |
in the following | 19 |
related to the | 18 |
of a review | 18 |
the value of | 18 |
as shown in | 18 |
in the document | 18 |
in the same | 18 |
similar to the | 17 |
large number of | 17 |
which is the | 17 |
we focus on | 17 |
the task of | 17 |
the local context | 17 |
to the best | 17 |
of seed words | 17 |
in our experiments | 17 |
to evaluate the | 17 |
query terms in | 17 |
the use of | 17 |
to capture the | 17 |
of the proposed | 17 |
in the training | 17 |
state of the | 17 |
the best of | 16 |
in the case | 16 |
best of our | 16 |
propose a novel | 16 |
the training set | 16 |
the impact of | 16 |
the context of | 16 |
a collection of | 16 |
most of the | 16 |
the proposed model | 16 |
to address the | 16 |
the case of | 16 |
in social media | 16 |
of the text | 16 |
a subset of | 16 |
is defined as | 16 |
the test set | 16 |
of our knowledge | 16 |
it can be | 16 |
based on a | 16 |
online learning to | 16 |
distributed representations of | 15 |
representation of the | 15 |
the problem of | 15 |
quality of the | 15 |
of information retrieval | 15 |
is the number | 15 |
of query terms | 15 |
neural networks for | 15 |
query and document | 15 |
the embedding of | 15 |
for each of | 15 |
is based on | 15 |
text and image | 15 |
that can be | 15 |
to compute the | 15 |
the clef ehealth | 15 |
with the same | 15 |
in the original | 15 |
it does not | 14 |
of the task | 14 |
of the model | 14 |
we want to | 14 |
we introduce the | 14 |
the effect of | 14 |
performance of the | 14 |
performance of our | 14 |
the result list | 14 |
score of a | 14 |
our proposed model | 14 |
in the context | 14 |
product attention layer | 14 |
we use a | 14 |
results show that | 14 |
to learn the | 14 |
of the clef | 14 |
along with the | 13 |
of the documents | 13 |
terms in the | 13 |
well as the | 13 |
the seed words | 13 |
the concatenation of | 13 |
table shows the | 13 |
goal is to | 13 |
sequential question answering | 13 |
be able to | 13 |
shown in fig | 13 |
in a document | 13 |
of the review | 13 |
which can be | 13 |
such as the | 13 |
as a result | 13 |
generated schema labels | 13 |
user and item | 13 |
of our model | 13 |
of the document | 13 |
aspects of the | 13 |
there is a | 13 |
have the same | 13 |
a list of | 12 |
a document is | 12 |
parameters of the | 12 |
the product attention | 12 |
is able to | 12 |
likely to be | 12 |
of this paper | 12 |
evaluate the performance | 12 |
of the first | 12 |
the evaluation of | 12 |
show that the | 12 |
relevant to the | 12 |
similarity between the | 12 |
a variety of | 12 |
of schema labels | 12 |
in natural language | 12 |
of the art | 12 |
have been proposed | 12 |
a large number | 12 |
refers to the | 12 |
the embedding space | 12 |
we assume that | 12 |
cold start scenario | 12 |
the model is | 12 |
it is not | 12 |
sessions and tasks | 12 |
the proposed approach | 12 |
this is a | 12 |
the end of | 12 |
are shown in | 12 |
the fact that | 12 |
queries and documents | 12 |
of candidate rankers | 12 |
we report the | 11 |
the training data | 11 |
the parameters of | 11 |
fang et al | 11 |
of the original | 11 |
we evaluate the | 11 |
of a domain | 11 |
to this end | 11 |
of a query | 11 |
the support of | 11 |
between query and | 11 |
clef ehealth evaluation | 11 |
the sd c | 11 |
of words and | 11 |
in social networks | 11 |
results of the | 11 |
to estimate the | 11 |
words and phrases | 11 |
in the first | 11 |
for a given | 11 |
this paper we | 11 |
in the top | 11 |
identification and verification | 11 |
we observe that | 11 |
the contact recommendation | 11 |
local context of | 11 |
more likely to | 11 |
each query term | 11 |
to the user | 11 |
warm start scenario | 11 |
some of the | 11 |
of the same | 11 |
show that our | 11 |
at least one | 11 |
we used the | 11 |
total number of | 11 |
the review embedding | 11 |
information from the | 11 |
can be seen | 11 |
to a query | 11 |
results of our | 11 |
is similar to | 11 |
documents in the | 10 |
consists of a | 10 |
of the paper | 10 |
at the end | 10 |
the attention weights | 10 |
words in the | 10 |
representations of words | 10 |
the text and | 10 |
of the evaluation | 10 |
on social media | 10 |
the role of | 10 |
the form of | 10 |
the representation of | 10 |
in the network | 10 |
during text preprocessing | 10 |
we need to | 10 |
we found that | 10 |
schema label generation | 10 |
based on their | 10 |
the cold start | 10 |
the probability that | 10 |
to explore the | 10 |
for future work | 10 |
of the generators | 10 |
the semantics of | 10 |
the difficulty of | 10 |
compared to the | 10 |
of our proposed | 10 |
we describe the | 10 |
in the second | 10 |
we present a | 10 |
a query claim | 10 |
in the non | 10 |
number of documents | 10 |
scores of the | 10 |
focus on the | 10 |
analysis of the | 10 |
the lack of | 10 |
we compare our | 10 |
note that the | 10 |
we propose to | 10 |
as a single | 10 |
verification of claims | 10 |
described in sect | 10 |
and verification of | 10 |
provided by the | 10 |
of the data | 10 |
fan et al | 10 |
we can see | 10 |
dataless text classification | 10 |
at test time | 10 |
title and description | 10 |
document network embedding | 10 |
the total number | 10 |
we refer to | 10 |
are used to | 10 |
we compute the | 10 |
embedding of the | 10 |
input to the | 10 |
the original paper | 10 |
the missing views | 10 |
we aim to | 10 |
in the form | 10 |
importance of the | 9 |
to make the | 9 |
for the task | 9 |
illustrated in fig | 9 |
disjunctively written languages | 9 |
supported by the | 9 |
sentence s i | 9 |
sentiment attention layer | 9 |
in comparison to | 9 |
the length of | 9 |
the cosine similarity | 9 |
number of candidate | 9 |
are able to | 9 |
the query and | 9 |
within the same | 9 |
can be considered | 9 |
a method for | 9 |
ehealth evaluation lab | 9 |
introduced in sect | 9 |
for keyphrase extraction | 9 |
model is trained | 9 |
corresponds to the | 9 |
we consider the | 9 |
the candidate user | 9 |
a ranked list | 9 |
can be found | 9 |
word and topic | 9 |
statistics of the | 9 |
sum of the | 9 |
based query sequences | 9 |
in previous work | 9 |
the relevance of | 9 |
proceedings of the | 9 |
posting lists in | 9 |
additional supporting information | 9 |
the same as | 9 |
are used as | 9 |
trained on the | 9 |
one or more | 9 |
the document is | 9 |
stepwise recipe dataset | 9 |
that our model | 9 |
of documents in | 9 |
referred to as | 9 |
bantu language dataset | 9 |
score of the | 9 |
description of the | 9 |
semantics of the | 9 |
found to be | 9 |
automatic identification and | 9 |
the choice of | 9 |
number of words | 9 |
depend on the | 9 |
early risk prediction | 9 |
an overview of | 9 |
and it is | 9 |
of social media | 9 |
in our case | 9 |
for each query | 9 |
a chemical reaction | 9 |
in which the | 9 |
each of these | 9 |
in the embedding | 9 |
is set to | 8 |
are interested in | 8 |
from the same | 8 |
the remainder of | 8 |
factoid question answering | 8 |
contact recommendation task | 8 |
of the two | 8 |
of the image | 8 |
better than the | 8 |
early detection of | 8 |
the probability of | 8 |
a total of | 8 |
in the next | 8 |
of the individual | 8 |
is used for | 8 |
by the authors | 8 |
method for stochastic | 8 |
of the words | 8 |
in this case | 8 |
addition to the | 8 |
weights of the | 8 |
we present the | 8 |
we also report | 8 |
the semantic similarity | 8 |
of the dataset | 8 |
of the neural | 8 |
is used as | 8 |
used in our | 8 |
are more likely | 8 |
to provide a | 8 |
and the number | 8 |
the sentiment attention | 8 |
the results show | 8 |
we propose an | 8 |
of query term | 8 |
used as the | 8 |
to the original | 8 |
contribute to the | 8 |
the neighbors of | 8 |
there are many | 8 |
the warm start | 8 |
on the internet | 8 |
importance of each | 8 |
each posting list | 8 |
of the word | 8 |
the goal is | 8 |
this work is | 8 |
and their compositionality | 8 |
the word embedding | 8 |
novelty and diversity | 8 |
in the graph | 8 |
the idea of | 8 |
this work was | 8 |
is the set | 8 |
this type of | 8 |
irony detection in | 8 |
terms of the | 8 |
the first one | 8 |
relevance score of | 8 |
from the text | 8 |
the scoring function | 8 |
described in the | 8 |
of recommender systems | 8 |
there is no | 8 |
dmp common standards | 8 |
for this task | 8 |
and phrases and | 8 |
as an example | 8 |
and document terms | 8 |
to represent the | 8 |
to address this | 8 |
be found in | 8 |
phrases and their | 8 |
top k results | 8 |
for all the | 8 |
the relevance score | 8 |
word embeddings and | 8 |
named entity recognition | 8 |
zhang et al | 8 |
the combination of | 8 |
our proposed approach | 8 |
i plan to | 8 |
to the following | 8 |
the application of | 8 |
context of query | 8 |
the most relevant | 8 |
of all the | 8 |
the target user | 8 |
the readability of | 8 |
can also be | 8 |
the knowledge graph | 8 |
the presence of | 8 |
of cond gans | 8 |
the ability of | 8 |
effectiveness of the | 8 |
evaluation of the | 8 |
divergence from randomness | 8 |
latent dirichlet allocation | 8 |
of the system | 8 |
is associated with | 8 |
the support scores | 8 |
to identify the | 8 |
we plan to | 8 |
the weights of | 8 |
the transfer layer | 8 |
the results for | 8 |
the top k | 8 |
the definition of | 8 |
the user and | 8 |
may not be | 8 |
the rest of | 8 |
the current state | 7 |
the objective function | 7 |
in the field | 7 |
are based on | 7 |
respect to a | 7 |
with the corresponding | 7 |
from the training | 7 |
this can be | 7 |
to information retrieval | 7 |
an attention mechanism | 7 |
and yelp data | 7 |
sentiment and product | 7 |
a query term | 7 |
using word embeddings | 7 |
global vectors for | 7 |
fed into a | 7 |
relevance scores of | 7 |
significantly better than | 7 |
to be a | 7 |
s and s | 7 |
as opposed to | 7 |
the differences in | 7 |
it has been | 7 |
to understand the | 7 |
the need to | 7 |
is different from | 7 |
ranked list of | 7 |
version of the | 7 |
to the same | 7 |
this means that | 7 |
severity analysis module | 7 |
length of the | 7 |
because of the | 7 |
on top of | 7 |
be used in | 7 |
of a claim | 7 |
for a query | 7 |
training of deep | 7 |
of the bias | 7 |
on the web | 7 |
knowledge graph embedding | 7 |
the field of | 7 |
of deep bidirectional | 7 |
our results show | 7 |
be seen as | 7 |
in the collection | 7 |
to generate a | 7 |
in other words | 7 |
the paper and | 7 |
of the attention | 7 |
the average of | 7 |
in web search | 7 |
ding and suel | 7 |
the jth sentence | 7 |
verbs and adjectives | 7 |
nodes in a | 7 |
would like to | 7 |
the scores of | 7 |
is not available | 7 |
which is a | 7 |
hidden state of | 7 |
for stochastic optimization | 7 |
of posting lists | 7 |
there has been | 7 |
bidirectional transformers for | 7 |
the location mentions | 7 |
models have been | 7 |
have access to | 7 |
n is the | 7 |
as described in | 7 |
and a set | 7 |
the amazon data | 7 |
of the retrieved | 7 |
for language understanding | 7 |
cosine similarity between | 7 |
deep neural networks | 7 |
defined in eq | 7 |
lists in the | 7 |
not able to | 7 |
representation of a | 7 |
the system is | 7 |
shown in the | 7 |
that the user | 7 |
we believe that | 7 |
deep bidirectional transformers | 7 |
we are interested | 7 |
of signs of | 7 |
the severity of | 7 |
word embeddings of | 7 |
as part of | 7 |
we can use | 7 |
of the reviews | 7 |
in our model | 7 |
the neural network | 7 |
natural language processing | 7 |
there are no | 7 |
the level of | 7 |
query term q | 7 |
the learning rate | 7 |
in the cold | 7 |
between the query | 7 |
level attention mechanism | 7 |
to generate the | 7 |
propose a new | 7 |
small number of | 7 |
attention mechanism to | 7 |
of the training | 7 |
the accuracy of | 7 |
is denoted by | 7 |
to assess the | 7 |
of training data | 7 |
between query terms | 7 |
to be able | 7 |
a dataset of | 7 |
as input to | 7 |
this is the | 7 |
label mixed ranking | 7 |
the standard pipeline | 7 |
to learn a | 7 |
schema labels and | 7 |
information retrieval models | 7 |
of a term | 7 |
discussed in sect | 7 |
is relevant to | 7 |
for computing the | 7 |
in the corpus | 7 |
the training process | 7 |
end of the | 7 |
graph convolutional networks | 7 |
a schema label | 7 |
rest of the | 7 |
a knowledge base | 7 |
the target and | 7 |
document language models | 7 |
for a specific | 7 |
for each task | 7 |
from social media | 7 |
convolutional neural network | 7 |
the analysis of | 7 |
schema label mixed | 7 |
our model is | 7 |
the bias goggles | 7 |
embeddings of the | 7 |
transformers for language | 7 |
by the model | 7 |
the negation morpheme | 7 |
we hypothesize that | 7 |
the data table | 7 |
a series of | 7 |
different from the | 7 |
the same time | 7 |
amazon data set | 7 |
into account the | 7 |
we use an | 7 |
and topic vectors | 7 |
of medical articles | 7 |
which we use | 7 |
and test sets | 7 |
the full text | 7 |
the dual approach | 7 |
in the warm | 7 |
such as bm | 7 |
semantic similarity between | 7 |
view of the | 7 |
neural machine translation | 7 |
of the target | 7 |
neural information retrieval | 7 |
models such as | 7 |
the similarity between | 7 |
the sum of | 7 |
the size of | 7 |
similarity of the | 7 |
used for the | 7 |
in all cases | 7 |
domain suggestion mining | 7 |
provided in the | 7 |
we set the | 7 |
of sentences and | 7 |
in the previous | 7 |
recurrent neural networks | 7 |
the work of | 7 |
of a word | 6 |
and the second | 6 |
document is relevant | 6 |
performance on the | 6 |
language modeling framework | 6 |
of the embeddings | 6 |
vectors for word | 6 |
is not a | 6 |
g and g | 6 |
the severity analysis | 6 |
by considering the | 6 |
neighbors of the | 6 |
do not consider | 6 |
of each word | 6 |
model based on | 6 |
is important to | 6 |
q and c | 6 |
should abandon fossil | 6 |
is computed as | 6 |
abandon fossil fuels | 6 |
to the overall | 6 |
to the task | 6 |
a gating mechanism | 6 |
is the same | 6 |
take advantage of | 6 |
and sd c | 6 |
review embedding is | 6 |
a social network | 6 |
improvements in the | 6 |
we also use | 6 |
i is the | 6 |
for our experiments | 6 |
depends on the | 6 |
term q i | 6 |
learning for image | 6 |
advantage of the | 6 |
for word representation | 6 |
model with the | 6 |
better results than | 6 |
conjunctively written languages | 6 |
cases where the | 6 |
depending on the | 6 |
the regression model | 6 |
ij is the | 6 |
community question answering | 6 |
is treated as | 6 |
labs of the | 6 |
the requirement set | 6 |
matching between query | 6 |
we make the | 6 |
the same document | 6 |
in the dataset | 6 |
of the results | 6 |
the words in | 6 |
the learned representations | 6 |
for each word | 6 |
to determine the | 6 |
such that the | 6 |
the hidden state | 6 |
in the paper | 6 |
the case for | 6 |
included in the | 6 |
sd c methods | 6 |
to a single | 6 |
the weight of | 6 |
has been shown | 6 |
that it is | 6 |
in a graph | 6 |
has led to | 6 |
are described in | 6 |
ability of the | 6 |
medium and long | 6 |
consider the following | 6 |
sd c models | 6 |
amount of data | 6 |
the review score | 6 |
as illustrated in | 6 |
seq seq retrieval | 6 |
word embeddings for | 6 |
of schema label | 6 |
we compare the | 6 |
results on a | 6 |
the existence of | 6 |
to better understand | 6 |
defined as follows | 6 |
the word vectors | 6 |
the addition of | 6 |
retrieval based on | 6 |
access to a | 6 |
this paper is | 6 |
the document representations | 6 |
associated with a | 6 |
in a single | 6 |
the development of | 6 |
have focused on | 6 |
amazon and yelp | 6 |
the case where | 6 |
should not be | 6 |
reported in table | 6 |
of the network | 6 |
in which we | 6 |
and sentiment analysis | 6 |
the outputs of | 6 |
to the fact | 6 |
a learning rate | 6 |
the most important | 6 |
we see that | 6 |
computation of the | 6 |
body of work | 6 |
the query terms | 6 |
a review of | 6 |
the frequency of | 6 |
mixed bantu language | 6 |
we showed that | 6 |
to have a | 6 |
do not have | 6 |
could be used | 6 |
the schema label | 6 |
inductive document network | 6 |
impact on the | 6 |
the difference between | 6 |
is the first | 6 |
together with the | 6 |
from the query | 6 |
semantic matching between | 6 |
is that the | 6 |
constraints in the | 6 |
biomedical question answering | 6 |
detection of signs | 6 |
the word level | 6 |
for each dataset | 6 |
take into account | 6 |
in most cases | 6 |
the distribution of | 6 |
distribution of the | 6 |
of query and | 6 |
model can be | 6 |
as in the | 6 |
any of the | 6 |
of a chemical | 6 |
the objective of | 6 |
information retrieval systems | 6 |
for bantu languages | 6 |
stochastic gradient descent | 6 |
the query term | 6 |
integer linear programming | 6 |
for text classification | 6 |
difference between the | 6 |
it is important | 6 |
we have a | 6 |
during the training | 6 |
subset of the | 6 |
yelp data sets | 6 |
bias score of | 6 |
conference and labs | 6 |
for the evaluation | 6 |
to the embedding | 6 |
at the same | 6 |
the design of | 6 |
the system to | 6 |
of the search | 6 |
effectiveness of our | 6 |
the structure of | 6 |
we denote by | 6 |
of the item | 6 |
in our study | 6 |
schema labels can | 6 |
refer to the | 6 |
the web graph | 6 |
the process of | 6 |
support scores of | 6 |
the goal of | 6 |
need to be | 6 |
this is because | 6 |
in the experiments | 6 |
the output of | 6 |
the amount of | 6 |
from a single | 6 |
that there are | 6 |
we introduce a | 6 |
we are able | 6 |
to demonstrate the | 6 |
social media posts | 6 |
similarity between query | 6 |
images and videos | 6 |
indicates that the | 6 |
influence of the | 6 |
the embeddings of | 6 |
the task is | 6 |
of users and | 6 |
the ir models | 6 |
for the contact | 6 |
in the text | 6 |
large amount of | 6 |
the type of | 6 |
the dmp common | 6 |
g r q | 6 |
a document to | 6 |
the word embeddings | 6 |
representations of schema | 6 |
diversity and novelty | 6 |
target user u | 6 |
average of the | 6 |
vector space model | 6 |
which has been | 6 |
the results in | 6 |
for contact recommendation | 6 |
of the previous | 6 |
in the past | 6 |
the domain of | 6 |
and labs of | 6 |
results on the | 6 |
in part by | 6 |
li et al | 6 |
those of the | 6 |
the approximated review | 6 |
outputs of the | 6 |
we find that | 6 |
close to the | 6 |
is important for | 6 |
in line with | 6 |
use of the | 6 |
of the sentence | 6 |
for each node | 6 |
at each interaction | 5 |
location mentions in | 5 |
we show the | 5 |
the system can | 5 |
the next section | 5 |
convolutional networks for | 5 |
a baseline system | 5 |
to study the | 5 |
the most recent | 5 |
the same type | 5 |
logged bandit feedback | 5 |
the similarity of | 5 |
this kind of | 5 |
that are semantically | 5 |
of our approach | 5 |
for sentence classification | 5 |
dataset retrieval task | 5 |
is available at | 5 |
of the checkthat | 5 |
even though the | 5 |
mean average precision | 5 |
and warm start | 5 |
the algorithm is | 5 |
w is the | 5 |
to classify the | 5 |
semantically related to | 5 |
it is common | 5 |
set of candidate | 5 |
on the full | 5 |
p g and | 5 |
corresponds to a | 5 |
useful information from | 5 |
value of the | 5 |
in the performance | 5 |
to the results | 5 |
text retrieval conference | 5 |
by providing a | 5 |
the statistics of | 5 |
standard insights extraction | 5 |
from the web | 5 |
of text passages | 5 |
support score of | 5 |
in contact recommendation | 5 |
results in the | 5 |
a part of | 5 |
to learn how | 5 |
the proposed system | 5 |
at prediction time | 5 |
mentions in tweets | 5 |
in our work | 5 |
on the quality | 5 |
that the proposed | 5 |
in the remainder | 5 |
to tackle the | 5 |
results are shown | 5 |
we study the | 5 |
by fang et | 5 |
are reported in | 5 |
we would like | 5 |
represented by the | 5 |
on automatic identification | 5 |
from the corpus | 5 |
remainder of the | 5 |
use the same | 5 |
in future work | 5 |
multifield document ranking | 5 |
semantic similarity of | 5 |
to analyze the | 5 |
been extensively used | 5 |
graph neural networks | 5 |
our work is | 5 |
models based on | 5 |
generalized language model | 5 |
claims as well | 5 |
value for k | 5 |
in a way | 5 |
from other sentences | 5 |
terms of both | 5 |
listed in table | 5 |
there is an | 5 |
resulting in a | 5 |
theoretical analysis of | 5 |
treated as a | 5 |
content of the | 5 |
contrary to the | 5 |
a variant of | 5 |
a training set | 5 |
the same meaning | 5 |
the features of | 5 |
experimental results on | 5 |
by using the | 5 |
words per sentence | 5 |
of our experiments | 5 |
set of queries | 5 |
the results obtained | 5 |
is possible to | 5 |
be interested in | 5 |
in the result | 5 |
information retrieval and | 5 |
is equivalent to | 5 |
representations of the | 5 |
report the results | 5 |
the mixed bantu | 5 |
for sentiment analysis | 5 |
to define the | 5 |
a small number | 5 |
specific sentiment lexicon | 5 |
sentences in the | 5 |
allows users to | 5 |
wachsmuth et al | 5 |
the subset of | 5 |
to optimize the | 5 |
from the trec | 5 |
documents with respect | 5 |
model for the | 5 |
is a very | 5 |
the higher the | 5 |
to evaluate our | 5 |
to extend the | 5 |
the real world | 5 |
is trained to | 5 |
on the same | 5 |
the explicit features | 5 |
used to extract | 5 |
this problem by | 5 |
canonical correlation analysis | 5 |
features such as | 5 |
and can be | 5 |
the vrss output | 5 |
the input space | 5 |
is applied to | 5 |
the overall sentiment | 5 |
the existing lexicon | 5 |
using a set | 5 |
will be used | 5 |
present in the | 5 |
visual representation of | 5 |
of the test | 5 |
of web domains | 5 |
dimension of the | 5 |
to answer rq | 5 |
the counterfactual evaluation | 5 |
commonly used in | 5 |
a study of | 5 |
latent semantic analysis | 5 |
is to learn | 5 |
we should abandon | 5 |
q and q | 5 |
lab on automatic | 5 |
a result of | 5 |
improve the performance | 5 |
is the most | 5 |
the usage of | 5 |
able to reproduce | 5 |
that we use | 5 |
neural ranking models | 5 |
the learning of | 5 |
trotman et al | 5 |
insights extraction pipeline | 5 |
a way to | 5 |
to investigate the | 5 |
a deep relevance | 5 |
by the user | 5 |
used to predict | 5 |
of each query | 5 |
we have proposed | 5 |
work was supported | 5 |
the evaluation forum | 5 |
generative adversarial networks | 5 |
to obtain the | 5 |
in the final | 5 |
node in the | 5 |
users and items | 5 |
a sample of | 5 |
to find the | 5 |
negative matrix factorization | 5 |
of text and | 5 |
to improve the | 5 |
in the image | 5 |
of the models | 5 |
the assessment of | 5 |
it is a | 5 |
and item embeddings | 5 |
during topic modeling | 5 |
the three languages | 5 |
hierarchical attention network | 5 |
partially supported by | 5 |
it is possible | 5 |
be used for | 5 |
in a network | 5 |
of graded disease | 5 |
cruzado and castells | 5 |
one of its | 5 |
to the local | 5 |
the generated schema | 5 |
in the entire | 5 |
we provide a | 5 |
the latent topic | 5 |
a large corpus | 5 |
reviews in the | 5 |
results from the | 5 |
and a symptom | 5 |
a mixture of | 5 |
number of neighbors | 5 |
for readability analysis | 5 |
of von mises | 5 |
prediction on the | 5 |
q i in | 5 |
that we can | 5 |
and use the | 5 |
train and test | 5 |
to do so | 5 |
the base model | 5 |
the results are | 5 |
c i and | 5 |
a survey of | 5 |
weighted sum of | 5 |
they can be | 5 |
recurrent neural network | 5 |
social media platforms | 5 |
review helpfulness prediction | 5 |
to select a | 5 |
a document as | 5 |
the bias score | 5 |
relevant to a | 5 |
in the figure | 5 |
corresponding to the | 5 |
the degree of | 5 |
context of a | 5 |
of the product | 5 |
from which we | 5 |
results in table | 5 |
history of writings | 5 |
a review text | 5 |
can see that | 5 |
formulated as a | 5 |
is shown in | 5 |
to ensure that | 5 |
in cases where | 5 |
multimodal deep learning | 5 |
for web search | 5 |
approximated review embedding | 5 |
posting lists are | 5 |
risk prediction on | 5 |
used in previous | 5 |
the same cluster | 5 |
the most similar | 5 |
our model can | 5 |
social media data | 5 |
in bold font | 5 |
the ir axioms | 5 |
million pmc articles | 5 |
by the system | 5 |
are likely to | 5 |
see that the | 5 |
the stances of | 5 |
information about the | 5 |
length in the | 5 |
explained in sect | 5 |
the attention mechanism | 5 |
the category ac | 5 |
to combine the | 5 |
i in the | 5 |
followed by a | 5 |
with the query | 5 |
lingual information retrieval | 5 |
a probability distribution | 5 |
outperforms the other | 5 |
from multiple modalities | 5 |
information retrieval evaluation | 5 |
there are several | 5 |
we have presented | 5 |
and product information | 5 |
structure of the | 5 |
chemical reactions from | 5 |
concatenation of all | 5 |
compare our model | 5 |
and candidate users | 5 |
used to train | 5 |
the difference is | 5 |
target and candidate | 5 |
as explained in | 5 |
table reports the | 5 |
of dataset retrieval | 5 |
from the original | 5 |
when they are | 5 |
obtained from the | 5 |
part of a | 5 |
query expansion using | 5 |
alam et al | 5 |
the focus of | 5 |
result of the | 5 |
the issue of | 5 |
discounted cumulative gain | 5 |
over chemical reactions | 5 |
the standard insights | 5 |
of the article | 5 |
number of queries | 5 |
of the authors | 5 |
yelp data set | 5 |
it as a | 5 |
a preliminary evaluation | 5 |
the selection of | 5 |
to the other | 5 |
capitalism and war | 5 |
of this work | 5 |
is to retrieve | 5 |
it is also | 5 |
the state of | 5 |
a dataset for | 5 |
of its views | 5 |
set of seed | 5 |
of retrieval models | 5 |
assume that the | 5 |
to score the | 5 |
aspects of bias | 5 |
nature of the | 5 |
the fast retrieval | 5 |
the result of | 5 |
in an unsupervised | 5 |
our model with | 5 |
to the question | 5 |
and how the | 5 |
of the candidate | 5 |
as compared to | 5 |
supporting and attacking | 5 |
large body of | 5 |
information from different | 5 |
accuracy of the | 5 |
sentences and documents | 5 |
seed words is | 5 |
the relation between | 5 |
with a new | 5 |
the majority of | 5 |
processed by the | 5 |
of the other | 5 |
for each user | 5 |
the cooccur method | 5 |
to a specific | 5 |
the target product | 5 |
the absence of | 5 |
the score function | 5 |
model on the | 5 |
to each query | 5 |
item and user | 5 |
figure shows the | 5 |
to be close | 5 |
the influence of | 5 |
text and images | 5 |
in the local | 5 |
by one of | 5 |
to enhance the | 5 |
can be interpreted | 5 |
results in a | 5 |
the next step | 5 |
are used for | 5 |
early risk detection | 5 |
for early risk | 5 |
for schema label | 5 |
and does not | 5 |
d is the | 5 |
the original pagerank | 5 |
and show that | 5 |
the original query | 5 |
more than one | 5 |
in the set | 5 |
the probability distribution | 5 |
also report the | 5 |
and they are | 5 |
the readability analysis | 5 |
k is the | 5 |
task as a | 5 |
comparison to the | 5 |
networks for sentence | 5 |
performance of retrieval | 5 |
of each sentence | 5 |
conditionally on the | 5 |
in the input | 5 |
are not available | 5 |
fully connected layer | 5 |
effectiveness and efficiency | 5 |
the value function | 5 |
the golden domains | 5 |
word embedding models | 5 |
of the user | 5 |
counterfactual learning to | 5 |
assigned by the | 5 |
in of the | 5 |
can lead to | 5 |
based on word | 5 |
our approach on | 5 |
is derived from | 5 |
given by the | 5 |
have been developed | 5 |
table retrieval task | 5 |
g and p | 5 |
they do not | 5 |
and only if | 5 |
documents to be | 5 |
of the number | 5 |
parts of the | 5 |
to a new | 5 |
the support score | 5 |
have proposed a | 5 |
to rank for | 5 |
not need to | 5 |
on irony detection | 4 |
evaluation of information | 4 |
to be relevant | 4 |
by latent semantic | 4 |
the paper is | 4 |
does not require | 4 |
terms in a | 4 |
can be applied | 4 |
approach can be | 4 |
schema labels for | 4 |
information needs that | 4 |
is also a | 4 |
in long tasks | 4 |
associated to the | 4 |
applied to a | 4 |
of clef ehealth | 4 |
is one of | 4 |
the same set | 4 |
we create a | 4 |
and we use | 4 |
a range of | 4 |
a specific ab | 4 |
with each other | 4 |
neural networks with | 4 |
in this way | 4 |
used the same | 4 |
meaning of the | 4 |
words and topics | 4 |
of neural re | 4 |
set the number | 4 |
over the baseline | 4 |
networks for text | 4 |
and the task | 4 |
predict the review | 4 |
editions of the | 4 |
is carried out | 4 |
of the pubmed | 4 |
to bantu languages | 4 |
bias goggles model | 4 |
the complexity of | 4 |
the neural models | 4 |
t rooted at | 4 |
a specific bc | 4 |
training data set | 4 |
and the baselines | 4 |
inspired by the | 4 |
we assume the | 4 |
requirements for a | 4 |
from a review | 4 |
exact and semantic | 4 |
learning of the | 4 |
statistical classification algorithms | 4 |
this ensures that | 4 |
the implementation of | 4 |
with the adam | 4 |
filter out the | 4 |
similarity search for | 4 |
transfer learning is | 4 |
models are trained | 4 |
to approximate a | 4 |
be the set | 4 |
address the problem | 4 |
have to be | 4 |
sentiment of the | 4 |
we consider two | 4 |
removed from the | 4 |
of labeled data | 4 |
is obtained by | 4 |
will focus on | 4 |
in this category | 4 |
with subword information | 4 |
the story of | 4 |
as can be | 4 |
i are the | 4 |
the spoken queries | 4 |
a regression model | 4 |
publicly available dataset | 4 |
neural ir models | 4 |
our goal is | 4 |
as sd c | 4 |
relevance matching model | 4 |
representation learning with | 4 |
according to its | 4 |
the same stance | 4 |
can be formulated | 4 |
a new corpus | 4 |
were able to | 4 |
edge weight constraints | 4 |
was found to | 4 |
in different languages | 4 |
number of topics | 4 |
instead of the | 4 |
be applied to | 4 |
the dmp is | 4 |
to seed words | 4 |
based model for | 4 |
validate our approach | 4 |
for each restaurant | 4 |
schema label representations | 4 |
common standards model | 4 |
for users to | 4 |
from the product | 4 |
differences between the | 4 |
learning from logged | 4 |
the same topic | 4 |
premises from the | 4 |
the noun class | 4 |
semantic term matching | 4 |
number of sentences | 4 |
scale reproducibility study | 4 |
all terms in | 4 |
set of nodes | 4 |
there are two | 4 |
is illustrated in | 4 |
a simple way | 4 |
factoid qa datasets | 4 |
to be useful | 4 |
the trec robust | 4 |
to the base | 4 |
open domain suggestion | 4 |
conditional on the | 4 |
given a sequence | 4 |
according to this | 4 |
of search engines | 4 |
we also include | 4 |
for link prediction | 4 |
where the support | 4 |
visual saliency recall | 4 |
an extension of | 4 |
detailed in sect | 4 |
to make it | 4 |
be provided to | 4 |
is the total | 4 |
the stepwise recipe | 4 |
to detect irony | 4 |
of sentences in | 4 |
attention is all | 4 |
extensively used in | 4 |
we have shown | 4 |
shown to be | 4 |
a consequence of | 4 |
the results from | 4 |
a theoretical analysis | 4 |
efficiency of the | 4 |
of the result | 4 |
evaluation lab overview | 4 |
with a large | 4 |
can improve the | 4 |
t is the | 4 |
to the two | 4 |
path length in | 4 |
data management plan | 4 |
are statistically significant | 4 |
can be very | 4 |
of information is | 4 |
national research fund | 4 |
function v d | 4 |
norm of the | 4 |
instead of a | 4 |
a user may | 4 |
for document clustering | 4 |
the test data | 4 |
is likely to | 4 |
than the other | 4 |
of the jth | 4 |
to train a | 4 |
and found that | 4 |
term in the | 4 |
also find that | 4 |
set of features | 4 |
are similar to | 4 |
we argue that | 4 |
mixed ranking model | 4 |
that they would | 4 |
the retrieved documents | 4 |
this results in | 4 |
avg sim of | 4 |
the main contributions | 4 |
each word w | 4 |
the nature of | 4 |
the inverted index | 4 |
because of their | 4 |
combination of the | 4 |
their both views | 4 |
to train the | 4 |
to verify that | 4 |
plan to explore | 4 |
are semantically related | 4 |
possibly correct answer | 4 |
a stream of | 4 |
word embedding techniques | 4 |
the vector representation | 4 |
to establish a | 4 |
matrix factorization for | 4 |
an ontology can | 4 |
the proposed method | 4 |
the current ranker | 4 |
set of document | 4 |
retrieved from the | 4 |
a text query | 4 |
point of view | 4 |
the advantage of | 4 |
set of users | 4 |
task of the | 4 |
approximate a review | 4 |
since there are | 4 |
based evaluation of | 4 |
of document language | 4 |
is the maximum | 4 |
exactly the same | 4 |
associated with the | 4 |
the first to | 4 |
been proposed for | 4 |
the traditional dbgd | 4 |
between the two | 4 |
local contexts of | 4 |
sentiment analysis methods | 4 |
methods have been | 4 |
model to predict | 4 |
and the corresponding | 4 |
query q k | 4 |
than the original | 4 |
the first step | 4 |
objective is to | 4 |
users tend to | 4 |
for training the | 4 |
seed words per | 4 |
of the different | 4 |
the ground truth | 4 |
both text and | 4 |
that exploit the | 4 |
from the seeds | 4 |
of a set | 4 |
the language modeling | 4 |
and structural information | 4 |
information retrieval research | 4 |
paper is organized | 4 |
of text for | 4 |
compare our proposed | 4 |
proposed approach is | 4 |
organized as follows | 4 |
is relevant if | 4 |
this is an | 4 |
a review embedding | 4 |
our vrss model | 4 |
the global context | 4 |
when using the | 4 |
that have the | 4 |
given a query | 4 |
of the information | 4 |
the review volume | 4 |
of exploring the | 4 |
the corresponding generator | 4 |
dynamic pruning strategies | 4 |
the labels of | 4 |
followed by the | 4 |
variational recurrent neural | 4 |
and the query | 4 |
to rank features | 4 |
not available at | 4 |
the training of | 4 |
hit path set | 4 |
results for the | 4 |
submitted by the | 4 |
the biased concept | 4 |
on a dataset | 4 |
the same domain | 4 |
a lot of | 4 |
proposed a model | 4 |
a ranking problem | 4 |
one way to | 4 |
a user query | 4 |
candidate rankers are | 4 |
we did not | 4 |
of the lab | 4 |
on word embeddings | 4 |
the generation of | 4 |
to the participants | 4 |
and show the | 4 |
a latent space | 4 |
premises for a | 4 |
large corpus of | 4 |
each document is | 4 |
representations of a | 4 |
the content of | 4 |
keywords of medical | 4 |
all you need | 4 |
of the questions | 4 |
search engines and | 4 |
and product attention | 4 |
the dimension of | 4 |
of the cases | 4 |
bm and beyond | 4 |
chen et al | 4 |
six bantu languages | 4 |
topics and relevance | 4 |
to get the | 4 |
the same semantic | 4 |
based approach to | 4 |
be due to | 4 |
on two datasets | 4 |
for this purpose | 4 |
term mismatch problem | 4 |
share the same | 4 |
the recommendation task | 4 |
table summarizes the | 4 |
the relevance filtering | 4 |
the performances of | 4 |
of the framework | 4 |
precision and recall | 4 |
edition of the | 4 |
function of the | 4 |
a method to | 4 |
an ablation study | 4 |
of each product | 4 |
partial score upperbound | 4 |
in the last | 4 |
for that purpose | 4 |
we calculate the | 4 |
for a document | 4 |
a target user | 4 |
and used to | 4 |
the probabilistic relevance | 4 |
different media forms | 4 |
the availability of | 4 |
the need for | 4 |
claims in political | 4 |
representations of documents | 4 |
it is essential | 4 |
transfer learning for | 4 |
the logging ranker | 4 |
counterfactual risk minimization | 4 |
word embedding as | 4 |
of the three | 4 |
for further details | 4 |
average number of | 4 |
features of the | 4 |
the seq seq | 4 |
a long history | 4 |
the concept of | 4 |
we also find | 4 |
images that are | 4 |
of each document | 4 |
at a time | 4 |
the text is | 4 |
words that are | 4 |
existing lexicon to | 4 |
knowledge base completion | 4 |
and neural models | 4 |
to automate the | 4 |
answer to the | 4 |
in the area | 4 |
of word representations | 4 |
to both express | 4 |
which is used | 4 |
for query expansion | 4 |
semantic matching in | 4 |
the text input | 4 |
language modeling approach | 4 |
that the system | 4 |
along the search | 4 |
the data and | 4 |
the best results | 4 |
we can also | 4 |
and image modalities | 4 |
at the word | 4 |
same set of | 4 |
simple yet effective | 4 |
corpus c i | 4 |
from the qatar | 4 |
the algorithms in | 4 |
of the weights | 4 |
goal of the | 4 |
for the document | 4 |
number of epochs | 4 |
trained word embeddings | 4 |
approaches based on | 4 |
the original version | 4 |
for a review | 4 |
for dataset retrieval | 4 |
between the target | 4 |
so as to | 4 |
of biased concepts | 4 |
it is more | 4 |
if and only | 4 |
different types of | 4 |
a document with | 4 |
a framework for | 4 |
the average number | 4 |
the review text | 4 |
manual seed words | 4 |
word representations in | 4 |
probabilistic relevance framework | 4 |
have a higher | 4 |
from different sources | 4 |
used in this | 4 |
and their corresponding | 4 |
to each other | 4 |
that relies on | 4 |
reactions from patents | 4 |
from the set | 4 |
funded by the | 4 |
ceur workshop proceedings | 4 |
used for training | 4 |
databases for rapid | 4 |
field document ranking | 4 |
have been made | 4 |
to obtain a | 4 |
is then fed | 4 |
to show that | 4 |
from review text | 4 |
of the f | 4 |
evaluation of graded | 4 |
for recommender systems | 4 |
the previous section | 4 |
the same number | 4 |
guided deep document | 4 |
documents to the | 4 |
task asks to | 4 |
see for further | 4 |
supported in part | 4 |
in the future | 4 |
simple english wikipedia | 4 |
being able to | 4 |
deep relevance matching | 4 |
the same way | 4 |
models and the | 4 |
the computation graph | 4 |
by cond gans | 4 |
on the idea | 4 |
training and validation | 4 |
as input for | 4 |
supporting information from | 4 |
the loss of | 4 |
to the one | 4 |
available in the | 4 |
ranking based on | 4 |
and relevance judgments | 4 |
extraction over chemical | 4 |
the tf component | 4 |
the values of | 4 |
work can be | 4 |
can be done | 4 |
baselines on the | 4 |
each step of | 4 |
in vector space | 4 |
deep learning models | 4 |
and image models | 4 |
of the association | 4 |
we remove the | 4 |
incremental approach for | 4 |
will be provided | 4 |
according to eq | 4 |
query term in | 4 |
sequence of images | 4 |
that in the | 4 |
and topic modeling | 4 |
the offline performance | 4 |
for a single | 4 |
and natural language | 4 |
as a sequence | 4 |
is organized as | 4 |
gating mechanism to | 4 |
word embeddings to | 4 |
indexing by latent | 4 |
the search stages | 4 |
as a multi | 4 |
of the users | 4 |
to the score | 4 |
has been done | 4 |
relevant information from | 4 |
of document passages | 4 |
deep document clustering | 4 |
of annotated data | 4 |
where the missing | 4 |
number of parameters | 4 |
that has been | 4 |
by the participants | 4 |
lexicon l i | 4 |
and c have | 4 |
of the underlying | 4 |
we used a | 4 |
our approach is | 4 |
we show that | 4 |
of the th | 4 |
past few years | 4 |
var times add | 4 |
the clustering of | 4 |
to the product | 4 |
at query time | 4 |
that are not | 4 |
for the former | 4 |
hoc information retrieval | 4 |
focus only on | 4 |
they have been | 4 |
number of posting | 4 |
all datasets and | 4 |
of title and | 4 |
note that we | 4 |
of the top | 4 |
to the sum | 4 |
of the biased | 4 |
and use it | 4 |
other sentences in | 4 |
an example of | 4 |
prior work on | 4 |
in knowledge graphs | 4 |
same number of | 4 |
our model outperforms | 4 |
participants said that | 4 |
of claims in | 4 |
the dataset retrieval | 4 |
we apply a | 4 |
a similar approach | 4 |
evaluated on the | 4 |
the same data | 4 |
node features are | 4 |
schema label features | 4 |
is necessary to | 4 |
a single verb | 4 |
to process the | 4 |
the user embedding | 4 |
english and arabic | 4 |
the qatar national | 4 |
exist in the | 4 |
experiments were conducted | 4 |
relevance judgments from | 4 |
the usefulness of | 4 |
find that the | 4 |
the bias characteristics | 4 |
the cosine distance | 4 |
recent advances in | 4 |
information of the | 4 |
the most successful | 4 |
from logged bandit | 4 |
leads to the | 4 |
the norm of | 4 |
ari acc ari | 4 |
implementation of the | 4 |
acc ari acc | 4 |
in long sessions | 4 |
a description of | 4 |
text in the | 4 |
data collected from | 4 |
images and text | 4 |
of web pages | 4 |
also use the | 4 |
into a latent | 4 |
we design a | 4 |
we follow the | 4 |
in the pipeline | 4 |
rely on the | 4 |
predicting the next | 4 |
of the discriminator | 4 |
embedding to the | 4 |
where k is | 4 |
was supported in | 4 |
answer passage retrieval | 4 |
automate the assessment | 4 |
we construct the | 4 |
zhang and balog | 4 |
note that this | 4 |
is the bias | 4 |
contributions of this | 4 |
as a graph | 4 |
r is the | 4 |
cambridge english exam | 4 |
is predicted as | 4 |
demonstrate the effectiveness | 4 |
with a single | 4 |
compute the semantic | 4 |
next query prediction | 4 |
basic retrieval models | 4 |
rooted at n | 4 |
the pivoted normalization | 4 |
we train on | 4 |
on the amazon | 4 |
the best performance | 4 |
of the support | 4 |
in this study | 4 |
ewc and ewc | 4 |
terms in documents | 4 |
of the cooccur | 4 |
of the approach | 4 |
this additional supporting | 4 |
can be obtained | 4 |
an existing lexicon | 4 |
representations of sentences | 4 |
the other methods | 4 |
set of all | 4 |
structural information of | 4 |
in the computation | 4 |
the sentence level | 4 |
support vector machine | 4 |
k and b | 4 |
latent representations of | 4 |
reuters rcv rcv | 4 |
the sequence of | 4 |
by adding a | 4 |
are set to | 4 |
a very large | 4 |
all the words | 4 |
of the nodes | 4 |
the hierarchical structure | 4 |
seen as a | 4 |
neural network to | 4 |
on early detection | 4 |
is all you | 4 |
the percentage of | 4 |
to our knowledge | 4 |
the document length | 4 |
concatenation of title | 4 |
the location mention | 4 |
comprises of the | 4 |
as future work | 4 |
by leveraging the | 4 |
between pairs of | 4 |
of the methods | 4 |
the term mismatch | 4 |
average of all | 4 |
on the accuracy | 4 |
complex grammatical structure | 4 |
can provide a | 4 |
bias characteristics of | 4 |
to get a | 4 |
of the time | 4 |
and morphological analysis | 4 |
counterfactual online learning | 4 |
the latent representations | 4 |
set of documents | 4 |
k k k | 4 |
in the literature | 4 |
the two previous | 4 |
is generated by | 4 |
keyphrase extraction from | 4 |
a ranker from | 4 |
the interactions between | 4 |
extraction of chemical | 4 |
of the most | 4 |
that have been | 4 |
and information retrieval | 4 |
robertson et al | 4 |
example of the | 4 |
related to a | 4 |
used to make | 4 |
the distance between | 4 |
than that of | 4 |
the discovery of | 4 |
other types of | 4 |
seed words are | 4 |
labels can be | 4 |
are far from | 4 |
matching in information | 4 |
the efficiency of | 4 |
participants stated that | 4 |
of documents to | 4 |
by comparing the | 4 |
is essential for | 4 |
address this problem | 4 |
the extraction of | 4 |
affected by the | 4 |
for document classification | 4 |
we have the | 4 |
is considered as | 4 |
case where the | 4 |
from incomplete judgments | 4 |
is composed of | 4 |
socialism and war | 4 |
of the queries | 4 |
have been extensively | 4 |
and precision at | 4 |
we take the | 4 |
a query q | 4 |
the local relevance | 4 |
weight of the | 4 |
when applied to | 4 |
trained on a | 4 |
in times of | 4 |
to differentiate between | 4 |
refer to as | 4 |
a linear combination | 4 |
of sd c | 4 |
for the first | 4 |
the deep learning | 4 |
the candidate users | 4 |
to show to | 4 |
on the dataset | 4 |
amount of training | 4 |
a given query | 4 |
deep learning based | 4 |
in political debates | 4 |
to the data | 4 |
the simplicity of | 4 |
wu et al | 4 |
hypothesize that the | 4 |
focus on a | 4 |
the difference in | 4 |
out of the | 4 |
the participants had | 4 |
the neural model | 4 |
the query reformulation | 4 |
on the learned | 4 |
we address the | 4 |
variant of the | 4 |
the item and | 4 |
composed of a | 4 |
as defined in | 4 |
compatible with the | 4 |
of the set | 4 |
that for the | 4 |
s k k | 4 |
of the embedding | 4 |
from the previous | 4 |
of the conference | 4 |
qatar national research | 4 |
information from other | 4 |
of each passage | 4 |
in this model | 4 |
to be more | 4 |
table presents the | 4 |
the datasets used | 4 |
be close to | 4 |
found in the | 4 |
as an ontology | 4 |
for the category | 4 |
which does not | 4 |
number of iterations | 4 |
for which we | 4 |
the relevance scores | 4 |
some of them | 4 |
query reformulation patterns | 4 |
the union of | 4 |
neural ranking model | 4 |
appears to be | 4 |
a disease and | 4 |
and the other | 4 |
in the language | 4 |
model and the | 4 |
concatenation of the | 4 |
to the ones | 4 |
learn the latent | 4 |
is represented by | 4 |
a large body | 4 |
the review is | 4 |
of an article | 4 |
sequence of text | 4 |
to label the | 4 |
of the regression | 4 |
for evaluation purposes | 4 |
the purpose of | 4 |
in this experiment | 4 |
pairs of documents | 4 |
which aims to | 4 |
characteristics of web | 4 |
of the web | 4 |
detect irony in | 4 |
interest in the | 4 |
order to avoid | 3 |
the first layer | 3 |
in the corresponding | 3 |
zamani and croft | 3 |
approaches have been | 3 |
estimation of word | 3 |
number of tokens | 3 |
a larger set | 3 |
an effective way | 3 |
sequential semantic structure | 3 |
and the most | 3 |
of each model | 3 |
bipartite graph g | 3 |
tables and show | 3 |
from a large | 3 |
learned by the | 3 |
automatic keyphrase extraction | 3 |
the idf component | 3 |
our proposed method | 3 |
of transrev is | 3 |
with a missing | 3 |
sentence representation is | 3 |
is to be | 3 |
more diverse and | 3 |
that our approach | 3 |
the most commonly | 3 |
test with a | 3 |
denotes the set | 3 |
w and t | 3 |
as an additional | 3 |
in computer vision | 3 |
in previous studies | 3 |
we are not | 3 |
for the training | 3 |
we extract the | 3 |
products and their | 3 |
from top to | 3 |
lab overview of | 3 |
the retrieved image | 3 |
reviews from the | 3 |
techniques for recommender | 3 |
a novel neural | 3 |
and evaluated on | 3 |
relevance filtering modules | 3 |
be attributed to | 3 |
was evaluated on | 3 |
the relationship of | 3 |
national science foundation | 3 |
and schema labels | 3 |
make use of | 3 |
of these two | 3 |
ranging from to | 3 |
of damage present | 3 |
of the collection | 3 |
run times are | 3 |
the k parameter | 3 |
of the seed | 3 |
expansion using word | 3 |
identification of the | 3 |
a review can | 3 |
domain dom to | 3 |
the target query | 3 |
from our experiments | 3 |
for stepwise illustration | 3 |
encode the local | 3 |
neural language models | 3 |
similarly to the | 3 |
the dataset for | 3 |
recurrent convolutional neural | 3 |
human generated machine | 3 |
p rod and | 3 |
of sentiment analysis | 3 |
in previous works | 3 |
asks to predict | 3 |
a knowledge graph | 3 |
task will be | 3 |
the exception of | 3 |
introduce a novel | 3 |
standard image and | 3 |
of the tasks | 3 |
baselines in terms | 3 |
information retrieval based | 3 |
by machine translation | 3 |
a partial score | 3 |
information can be | 3 |
of a single | 3 |
of the algorithms | 3 |
semantic meanings of | 3 |
were asked to | 3 |
word w ij | 3 |
large datasets of | 3 |
we test on | 3 |
image does not | 3 |
the model for | 3 |
value of k | 3 |
ecosystem for producing | 3 |
and linear threshold | 3 |
to reproduce the | 3 |
for document and | 3 |
of the last | 3 |
the greek web | 3 |
we were able | 3 |
network and the | 3 |
readability of articles | 3 |
retrieval in the | 3 |
bias scores of | 3 |
the methods used | 3 |
of deep learning | 3 |
since the same | 3 |
a single document | 3 |
r k represents | 3 |
features generated from | 3 |
words and sentences | 3 |
in the scoring | 3 |
has also been | 3 |
embedding of a | 3 |
the previous queries | 3 |
related to length | 3 |
by gsra grant | 3 |
the main components | 3 |
the vector space | 3 |
a document and | 3 |
of the madmp | 3 |
by the european | 3 |
and their attributes | 3 |
application of the | 3 |
document retrieval using | 3 |
we index the | 3 |
prediction of online | 3 |
of an ontology | 3 |
to experiment with | 3 |
be used as | 3 |
deal with the | 3 |
one can then | 3 |
computed as the | 3 |
in a story | 3 |
building on the | 3 |
premises in the | 3 |
information from multiple | 3 |
to increase the | 3 |
and we have | 3 |
task on early | 3 |
regression model to | 3 |
i t is | 3 |
the cluster representatives | 3 |
textual description of | 3 |
we evaluate our | 3 |
composed of text | 3 |
the visual representation | 3 |
are given by | 3 |
have a positive | 3 |
on machine translation | 3 |
those related to | 3 |
to reduce the | 3 |
is contrary to | 3 |
the online performance | 3 |
in the metadata | 3 |
assess the quality | 3 |
we also provide | 3 |
the relief group | 3 |
results of this | 3 |
concludes the paper | 3 |
number of nodes | 3 |
document figure classification | 3 |
and we report | 3 |
observations with their | 3 |
along with a | 3 |
of reuters rcv | 3 |
the exploration of | 3 |
most relevant information | 3 |
of knowledge bases | 3 |
average improvement in | 3 |
of a given | 3 |
embedding at test | 3 |
generated by g | 3 |
and c are | 3 |
on all the | 3 |
propose to use | 3 |
the sentence s | 3 |
two previous queries | 3 |
the system needs | 3 |
presented in table | 3 |
model without the | 3 |
can be represented | 3 |
sentiment lexicon l | 3 |
joint conference on | 3 |
adam optimizer with | 3 |
be solved using | 3 |
and testing on | 3 |
of claims as | 3 |
on a subset | 3 |
neural networks from | 3 |
a publicly available | 3 |
neural networks that | 3 |
news and reuters | 3 |
qa pairs with | 3 |
of the current | 3 |
the same query | 3 |
is calculated as | 3 |
position of each | 3 |
the neighborhood of | 3 |
number of common | 3 |
social media users | 3 |
was used to | 3 |
validation and test | 3 |
the matrix of | 3 |
select a subset | 3 |
techniques such as | 3 |
if it is | 3 |
representations and the | 3 |
use early stopping | 3 |
images and video | 3 |
recurrent seq seq | 3 |
the claim and | 3 |
over all the | 3 |
in a multilingual | 3 |
between words and | 3 |
to query terms | 3 |
details of the | 3 |
of this article | 3 |
schema label generator | 3 |
significance of the | 3 |
set of web | 3 |
damage in the | 3 |
on the map | 3 |
which leads to | 3 |
both express and | 3 |
candidate user length | 3 |
our contributions are | 3 |
is described by | 3 |
dynamic tree cut | 3 |
query evaluation using | 3 |
allowing it to | 3 |
explainable artificial intelligence | 3 |
semantic structure of | 3 |
readability analysis task | 3 |
a single image | 3 |
that bert sw | 3 |
the latent representation | 3 |
be related to | 3 |
generated from a | 3 |
there are only | 3 |
a node to | 3 |
models for web | 3 |
users in a | 3 |
multiview learning with | 3 |
not included in | 3 |
the variation of | 3 |
a setting where | 3 |
we have to | 3 |
to produce recommendations | 3 |
no effect on | 3 |
proposed model for | 3 |
address the term | 3 |
k of the | 3 |
network to learn | 3 |
information retrieval an | 3 |
which in turn | 3 |
corresponding posting lists | 3 |
knowledge graph completion | 3 |
textual and visual | 3 |
to reproduce prf | 3 |
same as the | 3 |
graph attention networks | 3 |
than the previous | 3 |
list of premises | 3 |
noisy click settings | 3 |
term u i | 3 |
a small set | 3 |
the following three | 3 |
on political debates | 3 |
to the network | 3 |
as the baseline | 3 |
is common to | 3 |
similarity score is | 3 |
summarised in table | 3 |
pigd and pmgd | 3 |
it is necessary | 3 |
on how to | 3 |
the vocabulary of | 3 |
the raw co | 3 |
related document classification | 3 |
at this point | 3 |
may be due | 3 |
entity alignments in | 3 |
random walks on | 3 |
sources such as | 3 |
a neural network | 3 |
on extracting insights | 3 |
and the product | 3 |
with missing views | 3 |
the links between | 3 |
the top documents | 3 |
in a similar | 3 |
dataset search engines | 3 |
using clickthrough data | 3 |
the maximum value | 3 |
words per cluster | 3 |
speech tagging and | 3 |
word vectors of | 3 |
is a long | 3 |
user searching for | 3 |
research in the | 3 |
to be the | 3 |
more than tokens | 3 |
when compared to | 3 |
of each of | 3 |
that this is | 3 |
when we train | 3 |
to retrieve the | 3 |
we build upon | 3 |
using attention fusion | 3 |
k represents the | 3 |
a space with | 3 |
probabilistic ranking framework | 3 |
hoc ranking with | 3 |
for the two | 3 |
has shown to | 3 |
we considered the | 3 |
is the average | 3 |
are extracted from | 3 |
task neural learning | 3 |
the veracity of | 3 |
precision at rank | 3 |
of research papers | 3 |
have shown that | 3 |
up to three | 3 |
reading comprehension dataset | 3 |
ir axioms for | 3 |
information in the | 3 |
approaches that rely | 3 |
dependencies among the | 3 |
a model that | 3 |
and compare it | 3 |
previous queries in | 3 |
and query terms | 3 |
to the models | 3 |
of the learned | 3 |
all interdisciplinary actors | 3 |
common standards working | 3 |
with a user | 3 |
bantu languages are | 3 |
text and the | 3 |
kong et al | 3 |
randomly select of | 3 |
their resources has | 3 |
use them to | 3 |
in our mapping | 3 |
w is a | 3 |
gsra grant gsra | 3 |
want to update | 3 |
which consists of | 3 |
not publicly available | 3 |
of user and | 3 |
annotated with tasks | 3 |
the text content | 3 |
content of a | 3 |
the former case | 3 |
for a restaurant | 3 |
to show the | 3 |
an introduction to | 3 |
the center for | 3 |
to that effect | 3 |
for the word | 3 |
by using a | 3 |
is due to | 3 |
of them are | 3 |
in one language | 3 |
for parameter tuning | 3 |
seems to be | 3 |
story of how | 3 |
with the exception | 3 |
statistical language models | 3 |
it might be | 3 |
gradient descent algorithms | 3 |
order to ensure | 3 |
a member of | 3 |
the image in | 3 |
knowledge graph alignment | 3 |
with posting lists | 3 |
the dot product | 3 |
based collaborative filtering | 3 |
for the same | 3 |
automatic verification of | 3 |
of importance of | 3 |
socialism and peace | 3 |
by applying a | 3 |
training for the | 3 |
number of training | 3 |
of the sentiment | 3 |
this leads to | 3 |
model consists of | 3 |
word in the | 3 |
and snippet retrieval | 3 |
from table that | 3 |
for the annotation | 3 |
visual feature similarity | 3 |
the evaluation results | 3 |
of the ecosystem | 3 |
query run times | 3 |
correlated with the | 3 |
is essential to | 3 |
machine learning models | 3 |
automatic generation of | 3 |
number of reviews | 3 |
the helpfulness of | 3 |
based recommender systems | 3 |
adjusted rand index | 3 |
tokenized path t | 3 |
a measure of | 3 |
semantic matching model | 3 |
a better overview | 3 |
and long sessions | 3 |
of the products | 3 |
on amazon and | 3 |
the images retrieved | 3 |
the local contextual | 3 |
with the number | 3 |
term memory networks | 3 |
standard information retrieval | 3 |
and p real | 3 |
applied to the | 3 |
with the claim | 3 |
difference of the | 3 |
the top results | 3 |
all the other | 3 |
sparsity in the | 3 |
out of registered | 3 |
community interest in | 3 |
for reviews that | 3 |
the perspective of | 3 |
to retrieve reviews | 3 |
a comparative study | 3 |
as the ground | 3 |
is given in | 3 |
by three annotators | 3 |
the past few | 3 |
information retrieval heuristics | 3 |
top of the | 3 |
best results are | 3 |
as we are | 3 |
is in line | 3 |
capitalism and peace | 3 |
we define the | 3 |
graph that is | 3 |
majority of the | 3 |
for disaster management | 3 |
for each premise | 3 |
computed using the | 3 |
from both the | 3 |
models that we | 3 |
level attention for | 3 |
and fed into | 3 |
makes relevance decisions | 3 |
and for each | 3 |
difficulty of the | 3 |
to evaluate how | 3 |
for the corresponding | 3 |
in the test | 3 |
outperforms the baseline | 3 |
guided topic model | 3 |
these results are | 3 |
images from the | 3 |
the individual modules | 3 |
are selected from | 3 |
for table retrieval | 3 |
by computing the | 3 |
takes advantage of | 3 |
to the seed | 3 |
maxscore to block | 3 |
i aim to | 3 |
on the context | 3 |