3.1.1 Convolutional Neural Networks
3.1.2 Recurrent Neural Networks
3.2 The Encoder-Decoder Framework
3.2.1 Learning Input Representations with Bidirectional RNNs
3.2.2 Generating Text Using Recurrent Neural Networks
3.2.3 Training and Decoding with Sequential Generators
3.3 Differences with Pre-Neural Text-Production Approaches
5 Building Better Input Representations
5.1 Pitfalls of Modelling Input as a Sequence of Tokens
5.1.1 Modelling Long Text as a Sequence of Tokens
5.1.2 Modelling Graphs or Trees as a Sequence of Tokens
5.1.3 Limitations of Sequential Representation Learning
5.2.1 Modelling Documents with Hierarchical LSTMs
5.2.2 Modelling Documents with Ensemble Encoders
5.2.3 Modelling Documents with Convolutional Sentence Encoders
5.3.1 Graph-to-Sequence Model for AMR Generation
5.3.2 Graph-Based Triple Encoder for RDF Generation
5.3.3 Graph Convolutional Networks as Graph Encoders
6 Modelling Task-Specific Communication Goals
6.1 Task-Specific Knowledge for Content Selection
6.1.1 Selective Encoding to Capture Salient Information
6.1.2 Bottom-Up Copy Attention for Content Selection
6.1.3 Graph-Based Attention for Salient Sentence Detection
6.1.4 Multi-Instance and Multi-Task Learning for Content Selection
6.2 Optimising Task-Specific Evaluation Metrics with Reinforcement Learning
6.2.1 The Pitfalls of Cross-Entropy Loss
6.2.2 Text Production as a Reinforcement Learning Problem
6.2.3 Reinforcement Learning Applications
6.3 User Modelling in Neural Conversational Models
PART III Data Sets and Conclusion
7.1 Data Sets for Data-to-Text Generation
7.1.1 Generating Biographies from Structured Data
7.1.2 Generating Entity Descriptions from Sets of RDF Triples
7.1.3 Generating Summaries of Sports Games from Box-Score Data
7.2 Data Sets for Meaning Representations to Text Generation
7.2.1 Generating from Abstract Meaning Representations
7.2.2 Generating Sentences from Dependency Trees
7.2.3 Generating from Dialogue Moves
7.3 Data Sets for Text-to-Text Generation