id: work_235n7dcftbgpplqreihoa3x6uy
author: Jiangming Liu
title: Shift-Reduce Constituent Parsing with Neural Lookahead Features
date: 2017
pages: 14
extension: .pdf
mime: application/pdf
words: 8014
sentences: 1031
flesch: 73
summary: Transition-based constituent parsers are fast and accurate, performing incremental parsing using a sequence of state transitions in linear time. Accordingly, the model should predict the constituent hierarchy for each word rather than a simple label, which is more difficult than simple sequence labelling, since two sequences of constituent hierarchies must be predicted for each word in the input. Second, for high accuracy, global features from the full sentence are necessary, since constituent hierarchies contain rich structural information. Constituent hierarchies are generated by an LSTM, in the same way a neural language model decoder generates output sentences for machine translation (Bahdanau et al., 2015). Compared with full parsing, the constituent hierarchies associated with each word have no forced … The encoder layer learns global features from the sentence, and the decoder layer predicts constituent hierarchies according to the encoder-layer features, using the attention mechanism (Bahdanau et al., 2015) to compute …
cache: ./cache/work_235n7dcftbgpplqreihoa3x6uy.pdf
txt: ./txt/work_235n7dcftbgpplqreihoa3x6uy.txt
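The summary above mentions that the decoder attends over encoder features via the attention mechanism of Bahdanau et al. (2015). As an illustration only (not the paper's actual model or code), here is a minimal NumPy sketch of that additive attention step; all dimensions, parameter names, and random values are assumptions for the example:

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over a 1-D score vector
    e = np.exp(x - np.max(x))
    return e / e.sum()

def additive_attention(encoder_states, decoder_state, W_enc, W_dec, v):
    """Bahdanau-style additive attention (illustrative sketch).

    encoder_states: (T, d_enc) one hidden vector per input word
    decoder_state:  (d_dec,) current decoder hidden state
    Returns (weights, context): attention weights over the T input
    positions and the weighted sum of encoder states.
    """
    # score_t = v . tanh(W_enc h_t + W_dec s)
    scores = np.tanh(encoder_states @ W_enc + decoder_state @ W_dec) @ v
    weights = softmax(scores)          # (T,), sums to 1
    context = weights @ encoder_states  # (d_enc,) context vector
    return weights, context

# Toy usage with random parameters (shapes are illustrative).
rng = np.random.default_rng(0)
T, d_enc, d_dec, d_att = 5, 8, 8, 16
H = rng.normal(size=(T, d_enc))        # encoder states
s = rng.normal(size=(d_dec,))          # decoder state
W_enc = rng.normal(size=(d_enc, d_att))
W_dec = rng.normal(size=(d_dec, d_att))
v = rng.normal(size=(d_att,))
weights, context = additive_attention(H, s, W_enc, W_dec, v)
```

The context vector is what a decoder would combine with its own state to predict the next output symbol (here, the next level of a constituent hierarchy).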