Multiple Instance Learning Networks for Fine-Grained Sentiment Analysis

Stefanos Angelidis and Mirella Lapata
Institute for Language, Cognition and Computation
School of Informatics, University of Edinburgh
10 Crichton Street, Edinburgh EH8 9AB
s.angelidis@ed.ac.uk, mlap@inf.ed.ac.uk

Abstract

We consider the task of fine-grained sentiment analysis from the perspective of multiple instance learning (MIL). Our neural model is trained on document sentiment labels, and learns to predict the sentiment of text segments, i.e. sentences or elementary discourse units (EDUs), without segment-level supervision. We introduce an attention-based polarity scoring method for identifying positive and negative text snippets and a new dataset which we call SPOT (as shorthand for Segment-level POlariTy annotations) for evaluating MIL-style sentiment models like ours. Experimental results demonstrate superior performance against multiple baselines, whereas a judgement elicitation study shows that EDU-level opinion extraction produces more informative summaries than sentence-based alternatives.

1 Introduction

Sentiment analysis has become a fundamental area of research in Natural Language Processing thanks to the proliferation of user-generated content in the form of online reviews, blogs, internet forums, and social media. A plethora of methods have been proposed in the literature that attempt to distill sentiment information from text, allowing users and service providers to make opinion-driven decisions.

The success of neural networks in a variety of applications (Bahdanau et al., 2015; Le and Mikolov, 2014; Socher et al., 2013) and the availability of large amounts of labeled data have led to an increased focus on sentiment classification. Supervised models are typically trained on documents (Johnson and Zhang, 2015a; Johnson and Zhang, 2015b; Tang et al., 2015; Yang et al., 2016), sentences (Kim, 2014), or phrases (Socher et al., 2011;
Socher et al., 2013) annotated with sentiment labels and used to predict sentiment in unseen texts. Coarse-grained document-level annotations are relatively easy to obtain due to the widespread use of opinion grading interfaces (e.g., star ratings accompanying reviews). In contrast, the acquisition of sentence- or phrase-level sentiment labels remains a laborious and expensive endeavor despite its relevance to various opinion mining applications, e.g., detecting or summarizing consumer opinions in online product reviews.

The usefulness of finer-grained sentiment analysis is illustrated in the example of Figure 1, where snippets of opposing polarities are extracted from a 2-star restaurant review. Although, as a whole, the review conveys negative sentiment, aspects of the reviewer's experience were clearly positive. This goes largely unnoticed when focusing solely on the review's overall rating.

[Rating: ★★] I had a very mixed experience at The Stand. The burger and fries were good. The chocolate shake was divine: rich and creamy. The drive-thru was horrible. It took us at least 30 minutes to order when there were only four cars in front of us. We complained about the wait and got a half-hearted apology. I would go back because the food is good, but my only hesitation is the wait.

Summary:
+ The burger and fries were good
+ The chocolate shake was divine
+ I would go back because the food is good
– The drive-thru was horrible
– It took us at least 30 minutes to order

Figure 1: An EDU-based summary of a 2-out-of-5 star review with positive and negative snippets.

In this work, we consider the problem of segment-level sentiment analysis from the perspective of Multiple Instance Learning (MIL; Keeler and Rumelhart, 1992). Instead of learning from individually labeled segments, our model only requires document-level supervision and learns to introspectively judge the sentiment of constituent segments. Beyond showing how to utilize document collections of rated reviews to train fine-grained sentiment predictors, we also investigate the granularity of the extracted segments. Previous research (Tang et al., 2015; Yang et al., 2016; Cheng and Lapata, 2016; Nallapati et al., 2017) has predominantly viewed documents as sequences of sentences. Inspired by recent work in summarization (Li et al., 2016) and sentiment classification (Bhatia et al., 2015), we also represent documents via Rhetorical Structure Theory's (Mann and Thompson, 1988) Elementary Discourse Units (EDUs). Although definitions for EDUs vary in the literature, we follow standard practice and take the elementary units of discourse to be clauses (Carlson et al., 2003). We employ a state-of-the-art discourse parser (Feng and Hirst, 2012) to identify them.
Our contributions in this work are three-fold: a novel multiple instance learning neural model which utilizes document-level sentiment supervision to judge the polarity of its constituent segments; the creation of SPOT, a publicly available dataset which contains Segment-level POlariTy annotations (for sentences and EDUs) and can be used for the evaluation of MIL-style models like ours; and the empirical finding (through automatic and human-based evaluation) that neural multiple instance learning is superior to more conventional neural architectures and other baselines on detecting segment sentiment and extracting informative opinions in reviews.1

1 Our code and SPOT dataset are publicly available at: https://github.com/stangelid/milnet-sent

2 Background

Our work lies at the intersection of multiple research areas, including sentiment classification, opinion mining and multiple instance learning. We review related work in these areas below.

Sentiment Classification Sentiment classification is one of the most popular tasks in sentiment analysis. Early work focused on unsupervised methods and the creation of sentiment lexicons (Turney, 2002; Hu and Liu, 2004; Wiebe et al., 2005; Baccianella et al., 2010) based on which the overall polarity of a text can be computed (e.g., by aggregating the sentiment scores of constituent words). More recently, Taboada et al. (2011) introduced SO-CAL, a state-of-the-art method that combines a rich sentiment lexicon with carefully defined rules over syntax trees to predict sentence sentiment.

Supervised learning techniques have subsequently dominated the literature (Pang et al., 2002; Pang and Lee, 2005; Qu et al., 2010; Xia and Zong, 2010; Wang and Manning, 2012; Le and Mikolov, 2014) thanks to user-generated sentiment labels or large-scale crowd-sourcing efforts (Socher et al., 2013). Neural network models in particular have achieved state-of-the-art performance on various sentiment classification tasks due to their ability to alleviate feature engineering. Kim (2014) introduced a very successful CNN architecture for sentence-level classification, whereas other work (Socher et al., 2011; Socher et al., 2013) uses recursive neural networks to learn sentiment for segments of varying granularity (i.e., words, phrases, and sentences). We describe Kim's (2014) approach in more detail as it is also used as part of our model.

Let x_i denote a k-dimensional word embedding of the i-th word in text segment s of length n. The segment's input representation is the concatenation of word embeddings x_1, ..., x_n, resulting in word matrix X. Let X_{i:i+j} refer to the concatenation of embeddings x_i, ..., x_{i+j}. A convolution filter W ∈ R^{lk}, applied to a window of l words, produces a new feature c_i = ReLU(W ◦ X_{i:i+l−1} + b), where ReLU is the Rectified Linear Unit non-linearity, '◦' denotes the entrywise product followed by a sum over all elements, and b ∈ R is a bias term. Applying the same filter to every possible window of word vectors in the segment produces a feature map c = [c_1, c_2, ..., c_{n−l+1}]. Multiple feature maps for varied window sizes are applied, resulting in a fixed-size segment representation v via max-over-time pooling. We will refer to the application of convolution to an input word matrix X as CNN(X). A final sentiment prediction is produced using a softmax classifier and the model is trained via back-propagation using sentence-level sentiment labels.
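To make the encoder concrete, the following is a minimal NumPy sketch of the convolution and max-over-time pooling just described (all names are our own illustration under the stated definitions, not the authors' released implementation):

```python
import numpy as np

def cnn_segment_encoder(X, filters):
    """Kim (2014)-style encoder: for each filter, slide over all windows of
    l words, apply ReLU, then max-over-time pool to a single feature.

    X       : (n, k) word-embedding matrix of a segment with n words.
    filters : list of (W, b, l) tuples, where W has shape (l * k,),
              b is a scalar bias and l is the window size in words.
    Returns a fixed-size segment vector with one entry per filter.
    """
    n, k = X.shape
    v = []
    for W, b, l in filters:
        # c_i = ReLU(W ∘ X_{i:i+l-1} + b) for every window of l words
        c = [max(0.0, float(W @ X[i:i + l].reshape(-1)) + b)
             for i in range(n - l + 1)]
        v.append(max(c))  # max-over-time pooling
    return np.array(v)

# Toy usage: a 6-word segment, 4-dimensional embeddings, two width-3 filters
rng = np.random.default_rng(0)
X = rng.normal(size=(6, 4))
filters = [(rng.normal(size=12), 0.0, 3), (rng.normal(size=12), 0.0, 3)]
print(cnn_segment_encoder(X, filters))  # -> vector of length 2
```

In the full model (Section 5.3), window sizes of 3, 4 and 5 words with 100 feature maps each yield the 300-dimensional segment vectors v_i = CNN(X_i).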
The availability of large-scale datasets (Diao et al., 2014; Tang et al., 2015) has also led to the development of document-level sentiment classifiers which exploit hierarchical neural representations. These are obtained by first building representations of sentences and aggregating those into a document feature vector (Tang et al., 2015). Yang et al. (2016) further acknowledge that words and sentences are differentially important in different contexts. They present a model which learns to attend (Bahdanau et al., 2015) to individual text parts when constructing document representations. We describe such an architecture in more detail as we use it as a point of comparison with our own model.

Given document d comprising segments (s_1, ..., s_m), a Hierarchical Network with attention (henceforth HIERNET; based on Yang et al., 2016) produces segment representations (v_1, ..., v_m) which are subsequently fed into a bidirectional GRU module (Bahdanau et al., 2015), whose resulting hidden vectors (h_1, ..., h_m) are used to produce attention weights (a_1, ..., a_m) (see Section 3.2 for more details on the attention mechanism). A document is represented as the weighted average of the segments' hidden vectors v_d = Σ_i a_i h_i. A final sentiment prediction is obtained using a softmax classifier and the model is trained via back-propagation using document-level sentiment labels. The architecture is illustrated in Figure 2(a). In their proposed model, Yang et al. (2016) use bidirectional GRU modules to represent segments as well as documents, whereas we use a more efficient CNN encoder to compose words into segment vectors2 (i.e., v_i = CNN(X_i)). Note that models like HIERNET do not naturally predict sentiment for individual segments; we discuss how they can be used for segment-level opinion extraction in Section 5.2.

Our own work draws inspiration from representation learning (Tang et al., 2015; Kim, 2014), especially the idea that not all parts of a document convey sentiment-worthy clues (Yang et al., 2016). Our model departs from previous approaches in that it provides a natural way of predicting the polarity of individual text segments without requiring segment-level annotations. Moreover, our attention mechanism directly facilitates opinion detection rather than simply aggregating sentence representations into a single document vector.

2 When applied to the YELP'13 and IMDB document classification datasets, the use of CNNs results in a relative performance decrease of < 2% compared to Yang et al.'s (2016) model.

Opinion Mining A standard setting for opinion mining and summarization (Lerman et al., 2009; Carenini et al., 2006; Ganesan et al., 2010; Di Fabbrizio et al., 2014; Gerani et al., 2014) assumes a set of documents that contain opinions about some entity of interest (e.g., camera). The goal of the system is to generate a summary that is representative of the average opinion and speaks to its important aspects (e.g., picture quality, battery life, value). Output summaries can be extractive (Lerman et al., 2009) or abstractive (Gerani et al., 2014; Di Fabbrizio et al., 2014) and the underlying systems exhibit varying degrees of linguistic sophistication, from identifying aspects (Lerman et al., 2009) to using RST-style discourse analysis and manually defined templates (Gerani et al., 2014; Di Fabbrizio et al., 2014). Our proposed method departs from previous work in that it focuses on detecting opinions in individual documents.
Given a review, we predict the polarity of every segment, allowing for the extraction of sentiment-heavy opinions. We explore the usefulness of EDU segmentation inspired by Li et al. (2016), who show that EDU-based summaries align with near-extractive summaries constructed by news editors. Importantly, our model is trained in a weakly-supervised fashion on large scale document classification datasets without recourse to fine-grained labels or gold-standard opinion summaries.

Multiple Instance Learning Our models adopt a Multiple Instance Learning (MIL) framework. MIL deals with problems where labels are associated with groups of instances or bags (documents in our case), while instance labels (segment-level polarities) are unobserved. An aggregation function is used to combine instance predictions and assign labels on the bag level. The goal is either to label bags (Keeler and Rumelhart, 1992; Dietterich et al., 1997; Maron and Ratan, 1998) or to simultaneously infer bag and instance labels (Zhou et al., 2009; Wei et al., 2014; Kotzias et al., 2015). We view segment-level sentiment analysis as an instantiation of the latter variant.

Initial MIL efforts for binary classification made the strong assumption that a bag is negative only if all of its instances are negative, and positive otherwise (Dietterich et al., 1997; Maron and Ratan, 1998; Zhang et al., 2002; Andrews and Hofmann, 2004; Carbonetto et al., 2008). Subsequent work relaxed this assumption, allowing for prediction combinations better suited to the tasks at hand. Weidmann et al. (2003) introduced a generalized MIL framework, where a combination of instance types is required to assign a bag label. Zhou et al. (2009) used graph kernels to aggregate predictions, exploiting relations between instances in object and text categorization. Xu and Frank (2004) proposed a multiple-instance logistic regression classifier where instance predictions were simply averaged, assuming equal and independent contribution toward bag classification. More recently, Kotzias et al. (2015) used sentence vectors obtained by a pre-trained hierarchical CNN (Denil et al., 2014) as features under an unweighted average MIL objective. Prediction averaging was further extended by Pappas and Popescu-Belis (2014; 2017), who used a weighted summation of predictions, an idea which we also adopt in our work.

Applications of MIL are many and varied. MIL was first explored by Keeler and Rumelhart (1992) for recognizing handwritten post codes, where the position and value of individual digits was unknown. MIL techniques have since been applied to drug activity prediction (Dietterich et al., 1997), image retrieval (Maron and Ratan, 1998; Zhang et al., 2002), object detection (Zhang et al., 2006; Carbonetto et al., 2008; Cour et al., 2011), text classification (Andrews and Hofmann, 2004), image captioning (Wu et al., 2015), paraphrase detection (Xu et al., 2014), and information extraction (Hoffmann et al., 2011).

When applied to sentiment analysis, MIL takes advantage of supervision signals on the document level in order to train segment-level sentiment predictors. Although their work is not couched in the framework of MIL, Täckström and McDonald (2011) show how sentence sentiment labels can be learned as latent variables from document-level annotations using hidden conditional random fields. Pappas and Popescu-Belis (2014) use a multiple instance regression model to assign sentiment scores to specific aspects of products.
The Group-Instance Cost Function (GICF), proposed by Kotzias et al. (2015), averages sentence sentiment predictions during training, while ensuring that similar sentences receive similar polarity labels. Their work uses a pre-trained hierarchical CNN to obtain sentence embeddings, but is not trainable end-to-end, in contrast with our proposed network. Additionally, none of the aforementioned efforts explicitly evaluate opinion extraction quality.

3 Methodology

In this section we describe how multiple instance learning can be used to address some of the drawbacks seen in previous approaches, namely the need for expert knowledge in lexicon-based sentiment analysis (Taboada et al., 2011), expensive fine-grained annotation on the segment level (Kim, 2014; Socher et al., 2013) or the inability to naturally predict segment sentiment (Yang et al., 2016).

3.1 Problem Formulation

Under multiple instance learning (MIL), a dataset D is a collection of labeled bags, each of which is a group of unlabeled instances. Specifically, each document d is a sequence (bag) of segments (instances). This sequence d = (s_1, s_2, ..., s_m) is obtained from a document segmentation policy (see Section 4 for details). A discrete sentiment label y_d ∈ [1, C] is associated with each document, where the labelset is ordered and classes 1 and C correspond to maximally negative and maximally positive sentiment. It is assumed that y_d is an unknown function of the unobserved segment-level labels:

y_d = f(y_1, y_2, ..., y_m)    (1)

Probabilistic sentiment classifiers will produce document-level predictions ŷ_d by selecting the most probable class according to class distribution p_d = ⟨p_d^(1), ..., p_d^(C)⟩. In a non-MIL framework a classifier would learn to predict the document's sentiment by directly conditioning on its segments' feature representations or their aggregate:

p_d = f̂_θ(v_1, v_2, ..., v_m)    (2)

In contrast, a MIL classifier will produce a class distribution p_i for each segment and additionally learn to combine these into a document-level prediction:

p_i = ĝ_θs(v_i) ,    (3)
p_d = f̂_θd(p_1, p_2, ..., p_m) .    (4)

In this work, ĝ and f̂ are defined using a single neural network, described below.

[Figure 2: A Hierarchical Network (HIERNET) for document-level sentiment classification and our proposed Multiple Instance Learning Network (MILNET). The models use the same attention mechanism to combine segment vectors and predictions respectively.]

3.2 Multiple Instance Learning Network

Hierarchical neural models like HIERNET have been used to predict document-level polarity by first encoding sentences and then combining these representations into a document vector. Hierarchical vector composition produces powerful sentiment predictors, but lacks the ability to introspectively judge the polarity of individual segments.

Our Multiple Instance Learning Network (henceforth MILNET) is based on the following intuitive assumptions about opinionated text. Each segment conveys a degree of sentiment polarity, ranging from very negative to very positive. Additionally, segments have varying degrees of importance, in relation to the overall opinion of the author. The overarching polarity of a text is an aggregation of segment polarities, weighted by their importance. Thus, our model attempts to predict the polarity of segments and decides which parts of the document are good indicators of its overall sentiment, allowing for the detection of sentiment-heavy opinions.
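The contrast between Equations (2) and (3)–(4) can be made concrete with a minimal NumPy sketch (all names are illustrative, not taken from the authors' code): a non-MIL classifier collapses the segment vectors before classifying once, whereas a MIL classifier classifies every segment and then aggregates the resulting distributions.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def non_mil_predict(V, Wc, bc):
    """Eq. (2), simplified: aggregate segment vectors, then classify once."""
    v_doc = V.mean(axis=0)               # crude stand-in for the aggregation in Eq. (2)
    return softmax(Wc @ v_doc + bc)      # single document-level distribution

def mil_predict(V, Wc, bc, weights=None):
    """Eqs. (3)-(4): classify every segment, then combine the predictions."""
    P = np.stack([softmax(Wc @ v + bc) for v in V])  # one distribution per segment
    if weights is None:
        weights = np.full(len(V), 1.0 / len(V))      # plain averaging; MILNET learns weights
    return weights @ P, P                            # document distribution + segment distributions

# Toy usage: 3 segments, 300-dim segment vectors, 5 sentiment classes
rng = np.random.default_rng(1)
V = rng.normal(size=(3, 300))
Wc, bc = 0.01 * rng.normal(size=(5, 300)), np.zeros(5)
p_doc, P_segments = mil_predict(V, Wc, bc)
```

Because the document distribution is a convex combination of segment distributions, the per-segment predictions remain available after training; they are what Section 4 later turns into polarity scores.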
An illustration of MILNET is shown in Figure 2(b); the model consists of three components: a CNN segment encoder, a softmax segment classifier and an attention-based prediction weighting module.

Segment Encoding An encoding v_i = CNN(X_i) is produced for each segment, using the CNN architecture described in Section 2.

Segment Classification Obtaining a separate representation v_i for every segment in a document allows us to produce individual segment sentiment predictions p_i = ⟨p_i^(1), ..., p_i^(C)⟩. This is achieved using a softmax classifier:

p_i = softmax(W_c v_i + b_c) ,    (5)

where W_c and b_c are the classifier's parameters, shared across all segments. Individual distributions p_i are shown in Figure 2(b) as small bar-charts.

Document Classification In the simplest case, document-level predictions can be produced by taking the average of segment class distributions: p_d^(c) = (1/m) Σ_i p_i^(c), c ∈ [1, C]. This is, however, a crude way of combining segment sentiment, as not all parts of a document convey important sentiment clues. We opt for a segment attention mechanism which rewards text units that are more likely to be good sentiment predictors.

[Figure 3: Polarity scores (bottom) obtained from class probability distributions for three EDUs (top) extracted from a restaurant review: "The starters were quite bland." (attention 0.3), "I didn't enjoy most of them," (attention 0.2), "but the burger was brilliant!" (attention 0.5). Attention weights (top) are used to fine-tune the obtained polarities.]

Our attention mechanism is based on a bidirectional GRU component (Bahdanau et al., 2015) and inspired by Yang et al. (2016). However, in contrast to their work, where attention is used to combine sentence representations into a single document vector, we utilize a similar technique to aggregate individual sentiment predictions. We first use separate GRU modules to produce forward and backward hidden vectors, which are then concatenated:

h→_i = GRU→(v_i) ,    (6)
h←_i = GRU←(v_i) ,    (7)
h_i = [h→_i, h←_i] ,  i ∈ [1, m] .    (8)

The importance of each segment is measured with the aid of a vector h_a, as follows:

h′_i = tanh(W_a h_i + b_a) ,    (9)
a_i = exp(h′_i^T h_a) / Σ_i exp(h′_i^T h_a) ,    (10)

where Equation (9) defines a one-layer MLP that produces an attention vector for the i-th segment. Attention weights a_i are computed as the normalized similarity of each h′_i with h_a. Vector h_a, which is randomly initialized and learned during training, can be thought of as a trained key, able to recognize sentiment-heavy segments. The attention mechanism is depicted in the dashed box of Figure 2, with attention weights shown as shaded circles. Finally, we obtain a document-level distribution over sentiment labels as the weighted sum of segment distributions (see top of Figure 2(b)):

p_d^(c) = Σ_i a_i p_i^(c) ,  c ∈ [1, C] .    (11)

Training The model is trained end-to-end on documents with user-generated sentiment labels. We use the negative log likelihood of the document-level prediction as an objective function:

L = − Σ_d log p_d^(y_d)    (12)

4 Polarity-based Opinion Extraction

After training, our model can produce segment-level sentiment predictions for unseen texts in the form of class probability distributions. A direct application of our method is opinion extraction, where highly positive and negative snippets are selected from the original document, producing extractive sentiment summaries, as described below.
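The class probability distributions just mentioned are the per-segment outputs p_i of Equation (5). The following minimal NumPy sketch shows how the attention step of Equations (9)–(11) weights and aggregates them during training; the bidirectional GRU of Equations (6)–(8) is assumed to have already produced the hidden vectors h_i, and all names are our own illustration rather than the authors' implementation.

```python
import numpy as np

def attention_aggregate(H, P, Wa, ba, ha):
    """Combine per-segment class distributions into a document distribution.

    H  : (m, 2u) concatenated GRU hidden vectors, one row per segment (Eq. 8).
    P  : (m, C) per-segment sentiment distributions from the softmax classifier (Eq. 5).
    Wa, ba : parameters of the one-layer MLP; ha : the trained attention key vector.
    """
    Hp = np.tanh(H @ Wa.T + ba)           # Eq. (9): attention vectors h'_i
    scores = Hp @ ha                       # similarity of each h'_i with the key ha
    a = np.exp(scores - scores.max())
    a = a / a.sum()                        # Eq. (10): normalized attention weights a_i
    p_doc = a @ P                          # Eq. (11): attention-weighted sum of distributions
    return p_doc, a

# Toy usage: 4 segments, 100-dim hidden vectors, 5 classes
rng = np.random.default_rng(2)
H = rng.normal(size=(4, 100))
P = rng.dirichlet(np.ones(5), size=4)      # stand-ins for the segment distributions p_i
Wa, ba, ha = 0.1 * rng.normal(size=(100, 100)), np.zeros(100), rng.normal(size=100)
p_doc, a = attention_aggregate(H, P, Wa, ba, ha)
```

The segment distributions P and attention weights a returned here are the two quantities that the polarity scoring step described next converts into a ranking of sentiment-heavy segments.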
Polarity Scoring In order to extract opinion summaries, we need to rank segments according to their sentiment polarity. We introduce a method that takes our model's confidence in the prediction into account, by reducing each segment's class probability distribution p_i to a single real-valued polarity score. To achieve this, we first define a real-valued class weight vector w = ⟨w^(1), ..., w^(C) | w^(c) ∈ [−1, 1]⟩ that assigns uniformly-spaced weights to the ordered labelset, such that w^(c+1) − w^(c) = 2/(C−1). For example, in a 5-class scenario, the class weight vector would be w = ⟨−1, −0.5, 0, 0.5, 1⟩. We compute the polarity score of a segment as the dot-product of the probability distribution p_i with vector w:

polarity(s_i) = Σ_c p_i^(c) w^(c)  ∈ [−1, 1]    (13)

Gated Polarity As a way of increasing the effectiveness of our method, we introduce a gated extension that uses the attention mechanism of our model to further differentiate between segments that carry significant sentiment cues and those that do not:

gated-polarity(s_i) = a_i · polarity(s_i) ,    (14)

where a_i is the attention weight assigned to the i-th segment. This forces the polarity scores of segments the model does not attend to closer to 0.

An illustration of our polarity scoring function is provided in Figure 3, where the class predictions (top) of three restaurant review segments are mapped to their corresponding polarity scores (bottom). We observe that our method produces the desired result; segments 1 and 2 convey negative sentiment and receive negative scores, whereas the third segment is mapped to a positive score. Although the same discrete class label is assigned to the first two, the second segment's score is closer to 0 (neutral) as its class probability mass is more evenly distributed.

Segmentation Policies As mentioned earlier, one of the hypotheses investigated in this work regards the use of subsentential units as the basis of extraction. Specifically, our model was applied to sentences and Elementary Discourse Units (EDUs), obtained from a Rhetorical Structure Theory (RST) parser (Feng and Hirst, 2012). According to RST, documents are first segmented into EDUs corresponding roughly to independent clauses which are then recursively combined into larger discourse spans. This results in a tree representation of the document, where connected nodes are characterized by discourse relations. We only utilize RST's segmentation, and leave the potential use of the tree structure to future work.

The example in Figure 3 illustrates why EDU-based segmentation might be beneficial for opinion extraction. The second and third EDUs correspond to the sentence: I didn't enjoy most of them, but the burger was brilliant. Taken as a whole, the sentence conveys mixed sentiment, whereas the EDUs clearly convey opposing sentiment.

5 Experimental Setup

In this section we describe the data used to assess the performance of our model. We also give details on model training and comparison systems.

                     Yelp'13    IMDB
Documents            335,018    348,415
Average #Sentences   8.90       14.02
Average #EDUs        19.11      37.38
Average #Words       152        325
Vocabulary Size      211,245    115,831
Classes              1–5        1–10

Table 1: Document-level sentiment classification datasets used to train our models.

                Yelp'13seg            IMDBseg
                Sent.      EDUs       Sent.      EDUs
#Segments       1,065      2,110      1,029      2,398
#Documents      100                   97
Classes         {–, 0, +}             {–, 0, +}

Table 2: SPOT dataset: numbers of documents and segments with polarity annotations.
5.1 Datasets

Our models were trained on two large-scale sentiment classification collections. The Yelp'13 corpus was introduced in Tang et al. (2015) and contains customer reviews of local businesses, each associated with human ratings on a scale from 1 (negative) to 5 (positive). The IMDB corpus of movie reviews was obtained from Diao et al. (2014); each review is associated with user ratings ranging from 1 to 10. Both datasets are split into training (80%), validation (10%) and test (10%) sets. A summary of statistics for each collection is provided in Table 1.

In order to evaluate model performance on the segment level, we constructed a new dataset named SPOT (as a shorthand for Segment POlariTy) by annotating documents from the Yelp'13 and IMDB collections. Specifically, we sampled reviews from each collection such that all document-level classes are represented uniformly, and the document lengths are representative of the respective corpus. Documents were segmented into sentences and EDUs, resulting in two segment-level datasets per collection. Statistics are summarized in Table 2.

Each review was presented to three Amazon Mechanical Turk (AMT) annotators who were asked to judge the sentiment conveyed by each segment (i.e., sentence or EDU) as negative, neutral, or positive. We assigned labels using a majority vote or a fourth annotator in the rare cases of no agreement (< 5%). Figure 4 shows the distribution of segment labels for each document-level class. As expected, documents with positive labels contain a larger number of positive segments compared to documents with negative labels and vice versa. Neutral segments are distributed in an approximately uniform manner across document classes. Interestingly, the proportion of neutral EDUs is significantly higher compared to neutral sentences. The observation reinforces our argument in favor of EDU segmentation, as it suggests that a sentence with positive or negative overall polarity may still contain neutral EDUs. Discarding neutral EDUs could therefore lead to more concise opinion extraction compared to relying on entire sentences.

[Figure 4: Distribution of segment-level labels (negative, neutral, positive) per document-level class on the SPOT datasets; panels: Yelp'13 Sentences, Yelp'13 EDUs, IMDB Sentences, IMDB EDUs; y-axis: proportion of segments.]

We further experimented on two collections introduced by Kotzias et al. (2015) which also originate from the YELP'13 and IMDB datasets. Each collection consists of 1,000 randomly sampled sentences annotated with binary sentiment labels.

5.2 Model Comparison

On the task of segment classification we compared MILNET, our multiple instance learning network, against the following methods:

Majority: Majority class applied to all instances.

SO-CAL: State-of-the-art lexicon-based system that classifies segments into positive, neutral, and negative classes (Taboada et al., 2011).

Seg-CNN: Fully-supervised CNN segment classifier trained on SPOT's labels (Kim, 2014).

GICF: The Group-Instance Cost Function model introduced in Kotzias et al. (2015). This is an unweighted average prediction aggregation MIL method that uses sentence features from a pre-trained convolutional neural model.

HIERNET: HIERNET does not explicitly generate individual segment predictions.
Segment polarity scores are obtained by assigning the document-level prediction to every segment. We can then produce finer-grained polarity distinctions via gating, using the model's attention weights.

We further illustrate the differences between HIERNET and MILNET in Figure 5, which includes short descriptions and simplified equations for each model. MILNET naturally produces distinct segment polarities, while HIERNET assigns a single polarity score to every segment. In both cases, gating is a further means of identifying neutral segments.

[Figure 5: System pipelines for HIERNET and MILNET showing 4 distinct phases for sentiment analysis.]

Finally, we differentiate between variants of HIERNET and MILNET according to:

Polarity source: Controls whether we assign polarities via segment-specific or document-wide predictions. HIERNET only allows for document-wide predictions. MILNET can use both.

Attention: We use models without gating (no subscript), with gating (gt subscript) as well as models trained with the attention mechanism disabled, falling back to simple averaging (avg subscript).

5.3 Model Training and Evaluation

We trained MILNET and HIERNET using Adadelta (Zeiler, 2012) for 25 epochs. Mini-batches of 200 documents were organized based on the reviews' segment and document lengths so the amount of padding was minimized. We used 300-dimensional pre-trained word2vec embeddings. We tuned hyperparameters on the validation sets of the document classification collections, resulting in the following configuration (unless otherwise noted). For the CNN segment encoder, we used window sizes of 3, 4 and 5 words with 100 feature maps per window size, resulting in 300-dimensional segment vectors. The GRU hidden vector dimensions for each direction were set to 50 and the attention vector dimensionality to 100. We used L2-normalization and dropout to regularize the softmax classifiers and additional dropout on the internal GRU connections.

Real-valued polarity scores produced by the two models are mapped to discrete labels using two appropriate thresholds t_1, t_2 ∈ [−1, 1], so that a segment s is classified as negative if polarity(s) < t_1, positive if polarity(s) > t_2, or neutral otherwise.3 To evaluate performance, we use macro-averaged F1, which is unaffected by class imbalance. We select optimal thresholds using 10-fold cross-validation and report mean scores across folds.

3 The discretization of polarities is only used for evaluation purposes and is not necessary for summary extraction, where we only need a relative ranking of segments.

The fully-supervised convolutional segment classifier (Seg-CNN) uses the same window size and feature map configuration as our segment encoder. Seg-CNN was trained on SPOT using segment labels directly and 10-fold cross-validation (identical folds as in our main models). Seg-CNN is not directly comparable to MILNET (or HIERNET) due to differences in supervision type (segment vs. document labels) and training size (1K-2K segment labels vs. ∼250K document labels). However, the comparison is indicative of the utility of fine-grained sentiment predictors that do not rely on expensive segment-level annotations.

6 Results

We evaluated models in two ways. We first assessed their ability to classify segment polarity in reviews using the newly created SPOT dataset and, additionally, the sentence corpora of Kotzias et al. (2015).
Our second suite of experiments focused on opinion extraction: we conducted a judgment elicitation study to determine whether extracts produced by MILNET are useful and of higher quality compared to HIERNET and other baselines. We were also interested to find out whether EDUs provide a better basis for opinion extraction than sentences.

6.1 Segment Classification

Table 3 summarizes our results. The first block in the table reports the performance of the majority class baseline. The second block considers models that do not utilize segment-level predictions, namely HIERNET which assigns polarity scores to segments using its document-level predictions, as well as the variant of MILNET which similarly uses document-level predictions only (Equation (11)). In the third block, MILNET's segment-level predictions are used. Each block further differentiates between three levels of attention integration, as previously described. The final block shows the performance of SO-CAL and the Seg-CNN classifier.

Method               Yelp'13seg           IMDBseg
                     Sent      EDU        Sent      EDU
Majority             19.02†    17.03†     18.32†    21.52†

Document-level polarities:
HIERNETavg           54.21†    50.90†     46.99†    49.02†
HIERNET              55.33†    51.43†     48.47†    49.70†
HIERNETgt            56.64†    58.75      62.12     57.38†
MILNETavg            58.43†    48.63†     53.40†    51.81†
MILNET               52.73†    53.59†     48.75†    47.18†
MILNETgt             59.74†    59.47      61.83†    58.24†

Segment-level polarities:
MILNETavg            51.79†    46.77†     45.69†    38.37†
MILNET               61.41     59.58      59.99†    57.71†
MILNETgt             63.35     59.85      63.97     59.87

SO-CAL               56.53†    58.16†     53.21†    60.40
Seg-CNN              56.18†    59.96      58.32†    62.95†

Table 3: Segment classification results (in macro-averaged F1). † indicates that the system in question is significantly different from MILNETgt (approximate randomization test (Noreen, 1989), p < 0.05).

When considering models that use document-level supervision, MILNET with gated, segment-specific polarities obtains the best classification performance across all four datasets. Interestingly, it performs comparably to Seg-CNN, the fully-supervised segment classifier, which provides additional evidence that MILNET can effectively identify segment polarity without the need for segment-level annotations. Our model also outperforms the strong SO-CAL baseline in all but one dataset, which is remarkable given the expert knowledge and linguistic information used to develop the latter. Document-level polarity predictions result in lower classification performance across the board. Differences between the standard hierarchical and multiple instance networks are less pronounced in this case, as MILNET loses the advantage of producing segment-specific sentiment predictions. Models without attention perform worse in most cases. The use of gated polarities benefits all model configurations, indicating the method's ability to selectively focus on segments with significant sentiment cues.

Neutral Segments             Non-Gtd    Gated
Sentences    HIERNET         4.67       36.60
             MILNET          39.61      44.60
EDUs         HIERNET         2.39       55.38
             MILNET          52.10      56.60

Table 4: F1 scores for neutral segments (Yelp'13).

Method      Yelp    IMDB
GICF        86.3    86.0
GICFHN      92.9    86.5
GICFMN      93.2    91.0
MILNET      94.0    91.9

Table 5: Accuracy scores on the sentence classification datasets introduced in Kotzias et al. (2015).

We further analyzed the polarities assigned by MILNET and HIERNET to positive, negative, and
neutral segments. Figure 6 illustrates the distribution of polarity scores produced by the two models on the Yelp'13 dataset (sentence segmentation). In the case of negative and positive sentences, both models demonstrate appropriately skewed distributions. However, the neutral class appears to be particularly problematic for HIERNET, where polarity scores are scattered across a wide range of values. In contrast, MILNET is more successful at identifying neutral sentences, as its corresponding distribution has a single mode near zero. Attention gating addresses this issue by moving the polarity scores of sentiment-neutral segments towards zero. This is illustrated in Table 4 where we observe that gated variants of both models do a better job at identifying neutral segments. The effect is very significant for HIERNET, while MILNET benefits slightly and remains more effective overall. Similar trends were observed in all four SPOT datasets.

[Figure 6: Distribution of predicted polarity scores across the negative, neutral and positive classes for HIERNET and MILNET (Yelp'13 sentences); x-axis: polarity.]

In order to examine the effect of training size, we trained multiple models using subsets of the original document collections. We trained on five random subsets for each training size, ranging from 100 documents to the full training set, and tested segment classification performance on SPOT. The results, averaged across trials, are presented in Figure 7. With the exception of the IMDB EDU-segmented dataset, MILNET only requires a few thousand training documents to outperform the supervised Seg-CNN. HIERNET follows a similar curve, but is inferior to MILNET. A reason for MILNET's weaker performance on the IMDB corpus (EDU-split) may be low-quality EDUs, due to the noisy and informal style of language used in IMDB reviews.

[Figure 7: Performance of HIERNETgt and MILNETgt for varying training sizes (macro-F1 against training size); panels: Yelp Sentences, Yelp EDUs, IMDB Sentences, IMDB EDUs; systems: MILNET, HIERNET, Seg-CNN.]

Finally, we compared MILNET against the GICF model (Kotzias et al., 2015) on their Yelp and IMDB sentence sentiment datasets.4 Their model requires sentence embeddings from a pre-trained neural model. We used the hierarchical CNN from their work (Denil et al., 2014) and, additionally, pre-trained HIERNET and MILNET sentence embeddings. The results in Table 5 show that MILNET outperforms all variants of GICF. Our models also seem to learn better sentence embeddings, as they improve GICF's performance on both collections.

4 GICF only handles binary labels, which makes it unsuitable for the full-scale comparisons in Table 3. Here, we binarize our training datasets and use same-sized sentence embeddings for all four models (R^150 for Yelp, R^72 for IMDB).

Method          Informativeness    Polarity    Coherence
HIERNETsent     43.7               33.6        43.5
MILNETsent      45.7               36.7        44.6
Unsure          10.7               29.6        11.8

HIERNETedu      34.2†              28.0†       48.4
MILNETedu       53.3               61.1        45.0
Unsure          12.5               11.0        6.6

MILNETsent      35.7†              33.4†       70.4†
MILNETedu       55.0               51.5        23.7
Unsure          9.3                15.2        5.9

LEAD            34.0               19.0†       40.3
RANDOM          22.9†              19.6†       17.8†
MILNETedu       37.4               46.9        33.3
Unsure          5.7                14.6        8.6

Table 6: Human evaluation results (in percentages).
† indicates that the system in question is significantly different from MILNET (sign-test, p < 0.01).

6.2 Opinion Extraction

In our opinion extraction experiments, AMT workers (all native English speakers) were shown an original review and a set of extractive, bullet-style summaries, produced by competing systems using a 30% compression rate. Participants were asked to decide which summary was best according to three criteria: Informativeness (Which summary best captures the salient points of the review?), Polarity (Which summary best highlights positive and negative comments?) and Coherence (Which summary is more coherent and easier to read?). Subjects were allowed to answer "Unsure" in cases where they could not discriminate between summaries. We used all reviews from our SPOT dataset and collected three responses per document. We ran four judgment elicitation studies: one comparing HIERNET and MILNET when summarizing reviews segmented as sentences, a second one comparing the two models with EDU segmentation, a third which compares EDU- and sentence-based summaries produced by MILNET, and a fourth where EDU-based summaries from MILNET were compared to a LEAD (the first N words from each document) and a RANDOM (random EDUs) baseline.

[Rating: ★★★★] As with any family-run hole in the wall, service can be slow. What the staff lacked in speed, they made up for in charm. The food was good, but nothing wowed me. I had the Pierogis while my friend had swedish meatballs. Both dishes were tasty, as were the sides. One thing that was disappointing was that the food was a a little cold (lukewarm). The restaurant itself is bright and clean. I will go back again when i feel like eating outside the box.

EDU-based, extracted via HIERNETgt:
(0.13) [+0.26] The food was good+
(0.10) [+0.26] but nothing wowed me.+
(0.09) [+0.26] The restaurant itself is bright and clean+
(0.13) [+0.26] Both dishes were tasty+
(0.18) [+0.26] I will go back again+

EDU-based, extracted via MILNETgt:
(0.16) [+0.12] The food was good+
(0.12) [+0.43] The restaurant itself is bright and clean+
(0.19) [+0.15] I will go back again+
(0.09) [–0.07] but nothing wowed me.−
(0.10) [–0.10] the food was a a little cold (lukewarm)−

Sentence-based, extracted via HIERNETgt:
(0.12) [+0.23] Both dishes were tasty, as were the sides+
(0.18) [+0.23] The food was good, but nothing wowed me+
(0.22) [+0.23] One thing that was disappointing was that the food was a a little cold (lukewarm)+

Sentence-based, extracted via MILNETgt:
(0.13) [+0.26] Both dishes were tasty, as were the sides+
(0.20) [+0.59] I will go back again when I feel like eating outside the box+
(0.18) [–0.12] The food was good, but nothing wowed me−

(number): attention weight; [number]: non-gated polarity score; text+: extracted positive opinion; text−: extracted negative opinion

Figure 8: Example EDU- and sentence-based opinion summaries produced by HIERNETgt and MILNETgt.

Table 6 summarizes our results, showing the proportion of participants that preferred each system. The first block in the table shows a slight preference for MILNET across criteria. The second block shows significant preference for MILNET against HIERNET on informativeness and polarity, whereas HIERNET was more often preferred in terms of coherence, although the difference is not statistically significant. The third block compares sentence and EDU summaries produced by MILNET. EDU summaries were perceived as significantly better in terms of informativeness and polarity, but not coherence.
This is somewhat expected as EDUs tend to produce more terse and telegraphic text and may seem unnatural due to segmentation errors. In the fourth block we observe that participants find MILNET more informative and better at distilling polarity compared to the LEAD and RANDOM (EDUs) baselines. We should point out that the LEAD system is not a strawman; it has proved hard to outperform by more sophisticated methods (Nenkova, 2005), particularly on the newswire domain.

Example EDU- and sentence-based summaries produced by gated variants of HIERNET and MILNET are shown in Figure 8, with attention weights and polarity scores of the extracted segments shown in round and square brackets respectively. For both granularities, HIERNET's positive document-level prediction results in a single polarity score assigned to every segment, and further adjusted using the corresponding attention weights. The extracted segments are informative, but fail to capture the negative sentiment of some segments. In contrast, MILNET is able to detect positive and negative snippets via individual segment polarities. Here, EDU segmentation produced a more concise summary with a clearer grouping of positive and negative snippets.

7 Conclusions

In this work, we presented a neural network model for fine-grained sentiment analysis within the framework of multiple instance learning. Our model can be trained on large scale sentiment classification datasets, without the need for segment-level labels. As a departure from the commonly used vector-based composition, our model first predicts sentiment at the sentence- or EDU-level and subsequently combines predictions up the document hierarchy. An attention-weighted polarity scoring technique provides a natural way to extract sentiment-heavy opinions. Experimental results demonstrate the superior performance of our model against more conventional neural architectures. Human evaluation studies also show that MILNET opinion extracts are preferred by participants and are effective at capturing informativeness and polarity, especially when using EDU segments. In the future, we would like to focus on multi-document, aspect-based extraction (Cao et al., 2017) and ways of improving the coherence of our summaries by taking into account more fine-grained discourse information (Daumé III and Marcu, 2002).

Acknowledgments

The authors gratefully acknowledge the support of the European Research Council (award number 681760). We thank TACL action editor Ani Nenkova and the anonymous reviewers whose feedback helped improve the present paper, as well as Charles Sutton, Timothy Hospedales, and members of EdinburghNLP for helpful discussions and suggestions.

References

Stuart Andrews and Thomas Hofmann. 2004. Multiple instance learning via disjunctive programming boosting. In Advances in Neural Information Processing Systems 16, pages 65–72. Curran Associates, Inc.

Stefano Baccianella, Andrea Esuli, and Fabrizio Sebastiani. 2010. SentiWordNet 3.0: An enhanced lexical resource for sentiment analysis and opinion mining. In Proceedings of the 5th Conference on International Language Resources and Evaluation, volume 10, pages 2200–2204, Valletta, Malta.

Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of the 3rd International Conference on Learning Representations, San Diego, California, USA.

Parminder Bhatia, Yangfeng Ji, and Jacob Eisenstein.
2015. Better document-level sentiment analysis from RST discourse parsing. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 2212–2218, Lisbon, Portugal.

Ziqiang Cao, Wenjie Li, Sujian Li, and Furu Wei. 2017. Improving multi-document summarization via text classification. In Proceedings of the 31st AAAI Conference on Artificial Intelligence, pages 3053–3058, San Francisco, California, USA.

Peter Carbonetto, Gyuri Dorkó, Cordelia Schmid, Hendrik Kück, and Nando De Freitas. 2008. Learning to recognize objects with little supervision. International Journal of Computer Vision, 77(1):219–237.

Giuseppe Carenini, Raymond Ng, and Adam Pauls. 2006. Multidocument summarization of evaluative text. In Proceedings of the 11th Conference of the European Chapter of the Association for Computational Linguistics, pages 305–312, Trento, Italy.

Lynn Carlson, Daniel Marcu, and Mary Ellen Okurowski. 2003. Building a discourse-tagged corpus in the framework of rhetorical structure theory. In Current and New Directions in Discourse and Dialogue, pages 85–112. Springer.

Jianpeng Cheng and Mirella Lapata. 2016. Neural summarization by extracting sentences and words. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 484–494, Berlin, Germany.

Timothee Cour, Ben Sapp, and Ben Taskar. 2011. Learning from partial labels. Journal of Machine Learning Research, 12(May):1501–1536.

Hal Daumé III and Daniel Marcu. 2002. A noisy-channel model for document compression. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 449–456, Philadelphia, Pennsylvania, USA.

Misha Denil, Alban Demiraj, and Nando de Freitas. 2014. Extraction of salient sentences from labelled documents. Technical report, University of Oxford.

Giuseppe Di Fabbrizio, Amanda Stent, and Robert Gaizauskas. 2014. A hybrid approach to multi-document summarization of opinions in reviews. In Proceedings of the 8th International Natural Language Generation Conference (INLG), pages 54–63, Philadelphia, Pennsylvania, USA.

Qiming Diao, Minghui Qiu, Chao-Yuan Wu, Alexander J. Smola, Jing Jiang, and Chong Wang. 2014. Jointly modeling aspects, ratings and sentiments for movie recommendation (JMARS). In Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 193–202, New York, NY, USA.

Thomas G. Dietterich, Richard H. Lathrop, and Tomás Lozano-Pérez. 1997. Solving the multiple instance problem with axis-parallel rectangles. Artificial Intelligence, 89(1):31–71.

Wei Vanessa Feng and Graeme Hirst. 2012. Text-level discourse parsing with rich linguistic features. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 60–68, Jeju Island, Korea.

Kavita Ganesan, ChengXiang Zhai, and Jiawei Han. 2010. Opinosis: A graph based approach to abstractive summarization of highly redundant opinions. In Proceedings of the 23rd International Conference on Computational Linguistics, pages 340–348, Beijing, China.

Shima Gerani, Yashar Mehdad, Giuseppe Carenini, Raymond T. Ng, and Bita Nejat. 2014. Abstractive summarization of product reviews using discourse structure. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, pages 1602–1613, Doha, Qatar.

Raphael Hoffmann, Congle Zhang, Xiao Ling, Luke Zettlemoyer, and Daniel S. Weld. 2011.
Knowledge-based weak supervision for information extraction of overlapping relations. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies - Volume 1, pages 541–550, Portland, Oregon, USA.

Minqing Hu and Bing Liu. 2004. Mining and summarizing customer reviews. In Proceedings of the 10th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 168–177, Seattle, Washington, USA.

Rie Johnson and Tong Zhang. 2015a. Effective use of word order for text categorization with convolutional neural networks. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 103–112, Denver, Colorado, USA.

Rie Johnson and Tong Zhang. 2015b. Semi-supervised convolutional neural networks for text categorization via region embedding. In Advances in Neural Information Processing Systems 28, pages 919–927. Curran Associates, Inc.

Jim Keeler and David E. Rumelhart. 1992. A self-organizing integrated segmentation and recognition neural net. In Advances in Neural Information Processing Systems 4, pages 496–503. Morgan-Kaufmann.

Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, pages 1746–1751, Doha, Qatar.

Dimitrios Kotzias, Misha Denil, Nando De Freitas, and Padhraic Smyth. 2015. From group to individual labels using deep features. In Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 597–606, Sydney, Australia.

Quoc Le and Tomas Mikolov. 2014. Distributed representations of sentences and documents. In Proceedings of the 31st International Conference on Machine Learning, pages 1188–1196, Beijing, China.

Kevin Lerman, Sasha Blair-Goldensohn, and Ryan McDonald. 2009. Sentiment summarization: Evaluating and learning user preferences. In Proceedings of the 12th Conference of the European Chapter of the ACL, pages 514–522, Athens, Greece.

Junyi Jessy Li, Kapil Thadani, and Amanda Stent. 2016. The role of discourse units in near-extractive summarization. In Proceedings of the SIGDIAL 2016 Conference, The 17th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 137–147, Los Angeles, California, USA.

William C. Mann and Sandra A. Thompson. 1988. Rhetorical structure theory: Toward a functional theory of text organization. Text - Interdisciplinary Journal for the Study of Discourse, 8(3):243–281.

Oded Maron and Aparna Lakshmi Ratan. 1998. Multiple-instance learning for natural scene classification. In Proceedings of the 15th International Conference on Machine Learning, volume 98, pages 341–349, San Francisco, California, USA.

Ramesh Nallapati, Feifei Zhai, and Bowen Zhou. 2017. SummaRuNNer: A recurrent neural network based sequence model for extractive summarization of documents. In Proceedings of the 31st AAAI Conference on Artificial Intelligence, pages 3075–3081, San Francisco, California.

Ani Nenkova. 2005. Automatic text summarization of newswire: Lessons learned from the document understanding conference. In Proceedings of the 20th AAAI, pages 1436–1441, Pittsburgh, Pennsylvania, USA.

Eric Noreen. 1989. Computer-intensive Methods for Testing Hypotheses: An Introduction. Wiley.

Bo Pang and Lillian Lee. 2005.
Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales. In Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics, pages 115–124. Association for Computational Linguistics.

Bo Pang, Lillian Lee, and Shivakumar Vaithyanathan. 2002. Thumbs up? Sentiment classification using machine learning techniques. In Proceedings of the 2002 Conference on Empirical Methods in Natural Language Processing, pages 79–86, Pittsburgh, Pennsylvania, USA.

Nikolaos Pappas and Andrei Popescu-Belis. 2014. Explaining the stars: Weighted multiple-instance learning for aspect-based sentiment analysis. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, pages 455–466, Doha, Qatar.

Nikolaos Pappas and Andrei Popescu-Belis. 2017. Explicit document modeling through weighted multiple-instance learning. Journal of Artificial Intelligence Research, 58:591–626.

Lizhen Qu, Georgiana Ifrim, and Gerhard Weikum. 2010. The bag-of-opinions method for review rating prediction from sparse text patterns. In Proceedings of the 23rd International Conference on Computational Linguistics, pages 913–921, Beijing, China.

Richard Socher, Jeffrey Pennington, Eric H. Huang, Andrew Y. Ng, and Christopher D. Manning. 2011. Semi-supervised recursive autoencoders for predicting sentiment distributions. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 151–161, Edinburgh, Scotland, UK.

Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Y. Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1631–1642, Seattle, Washington, USA.

Maite Taboada, Julian Brooke, Milan Tofiloski, Kimberly Voll, and Manfred Stede. 2011. Lexicon-based methods for sentiment analysis. Computational Linguistics, 37(2):267–307.

Oscar Täckström and Ryan McDonald. 2011. Discovering fine-grained sentiment with latent variable structured prediction models. In Proceedings of the 39th European Conference on Information Retrieval, pages 368–374, Aberdeen, Scotland, UK.

Duyu Tang, Bing Qin, and Ting Liu. 2015. Document modeling with gated recurrent neural network for sentiment classification. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1422–1432, Lisbon, Portugal.

Peter D. Turney. 2002. Thumbs up or thumbs down? Semantic orientation applied to unsupervised classification of reviews. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, pages 417–424, Pittsburgh, Pennsylvania, USA.

Sida Wang and Christopher D. Manning. 2012. Baselines and bigrams: Simple, good sentiment and topic classification. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Short Papers - Volume 2, pages 90–94, Jeju Island, Korea.

Xiu-Shen Wei, Jianxin Wu, and Zhi-Hua Zhou. 2014. Scalable multi-instance learning. In Proceedings of the IEEE International Conference on Data Mining, pages 1037–1042, Shenzhen, China.

Nils Weidmann, Eibe Frank, and Bernhard Pfahringer. 2003. A two-level learning method for generalized multi-instance problems. In Proceedings of the 14th European Conference on Machine Learning, pages 468–479, Dubrovnik, Croatia.
Janyce Wiebe, Theresa Wilson, and Claire Cardie. 2005. Annotating expressions of opinions and emotions in language. Language Resources and Evaluation, 39(2):165–210.

Jiajun Wu, Yinan Yu, Chang Huang, and Kai Yu. 2015. Deep multiple instance learning for image classification and auto-annotation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3460–3469, Boston, Massachusetts, USA.

Rui Xia and Chengqing Zong. 2010. Exploring the use of word relation features for sentiment classification. In Proceedings of the 23rd International Conference on Computational Linguistics: Posters, pages 1336–1344, Beijing, China.

Xin Xu and Eibe Frank. 2004. Logistic regression and boosting for labeled bags of instances. In Proceedings of the Pacific-Asia Conference on Knowledge Discovery and Data Mining, pages 272–281. Springer-Verlag.

Wei Xu, Alan Ritter, Chris Callison-Burch, William B. Dolan, and Yangfeng Ji. 2014. Extracting lexically divergent paraphrases from Twitter. Transactions of the Association for Computational Linguistics, 2:435–448.

Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alex Smola, and Eduard Hovy. 2016. Hierarchical attention networks for document classification. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1480–1489, San Diego, California, USA.

Matthew D. Zeiler. 2012. ADADELTA: An adaptive learning rate method. CoRR, abs/1212.5701.

Qi Zhang, Sally A. Goldman, Wei Yu, and Jason E. Fritts. 2002. Content-based image retrieval using multiple-instance learning. In Proceedings of the 19th International Conference on Machine Learning, volume 2, pages 682–689, Sydney, Australia.

Cha Zhang, John C. Platt, and Paul A. Viola. 2006. Multiple instance boosting for object detection. In Advances in Neural Information Processing Systems 18, pages 1417–1424. MIT Press.

Zhi-Hua Zhou, Yu-Yin Sun, and Yu-Feng Li. 2009. Multi-instance learning by treating instances as non-i.i.d. samples. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 1249–1256, Montréal, Quebec.