Text Summarization: Research Paradigms, Important Models, Evaluation Metrics (continuously updated…)
诸神缄默不语 – Personal CSDN Blog Post Index
This post was gradually compiled while I studied the text summarization task, based on my study materials. Sources (papers, blog posts, videos, etc.) are credited in footnotes and similar markers. Topics that would make this single post too long are covered in separate posts, with hyperlinks provided here.
This post mainly lists milestone papers in text summarization.
Note: apart from the table at the top of the post, when I have already written a study note for a referenced paper, I cite my own post instead of the original paper.
Classic text summarization papers: 文本摘要经典论文
Table of Contents
- 1. Organized by task type and solution approach
  - 1.1 Abstractive summarization (rewriting) vs. extractive summarization (sentence compression)
    - 1.1.1 Abstractive summarization
    - 1.1.2 Extractive summarization
  - 1.2 Single-document vs. multi-document summarization
    - 1.2.1 Single-document summarization
    - 1.2.2 Multi-document summarization
  - 1.3 Key research directions
- 2. Summarization models by era and capability
- 3. Integrated toolkits
- 4. Evaluation metrics
- 5. Other references not credited in the text or footnotes
1. Organized by Task Type and Solution Approach
1.1 Abstractive Summarization (Rewriting) vs. Extractive Summarization (Sentence Compression)
1.1.1 Abstractive summarization
This section draws on the related-work section of the following paper: [1]
Abstractive summarization is a sequence-generation (text generation, NLG) task, generally handled with a seq2seq (S2S) architecture, i.e. an encoder-decoder architecture; a minimal sketch follows.
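To make the setup concrete, here is a minimal PyTorch sketch of the encoder-decoder idea (the model sizes, the GRU choice, and teacher forcing are my own illustrative assumptions, not taken from any particular paper):

```python
import torch
import torch.nn as nn

VOCAB, EMB, HID = 10000, 128, 256  # illustrative sizes

class Seq2Seq(nn.Module):
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, EMB)
        self.encoder = nn.GRU(EMB, HID, batch_first=True)
        self.decoder = nn.GRU(EMB, HID, batch_first=True)
        self.out = nn.Linear(HID, VOCAB)

    def forward(self, src_ids, tgt_ids):
        _, h = self.encoder(self.emb(src_ids))       # encode the source document
        dec, _ = self.decoder(self.emb(tgt_ids), h)  # teacher-forced decoding
        return self.out(dec)                         # (batch, tgt_len, vocab) logits

model = Seq2Seq()
src = torch.randint(0, VOCAB, (2, 50))  # two toy "documents", 50 tokens each
tgt = torch.randint(0, VOCAB, (2, 12))  # two toy 12-token "summaries"
print(model(src, tgt).shape)            # torch.Size([2, 12, 10000])
```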
It relies on sentence fusion and rewriting (paraphrasing), i.e. "rephrasing and introducing new concepts/words" (quoted from Friendly Topic Assistant for Transformer Based Abstractive Summarization).
Structure-based approaches:
- Tree-based: tree linearization
- Template-based:
  - Generating single and multi-document summaries with gistexter
  - sArAmsha - A Kannada abstractive summarizer
- Entity-based
- Lead and Body Phrase Method ("lead" refers to the opening of the article; in short, these methods find important phrases and operate on them. Honestly I did not fully understand the details; see this post: Towards Automatic Summarization. Part 2. Abstractive Methods. | by Sciforce | Sciforce | Medium)
- Rule Based Method
- Semantics-based approaches
  - Multimodal semantic model
  - Information-item-based
  - Semantic-graph-based
Common problems and solutions proposed for them:
- Repetition
  - The coverage mechanism proposed in PGN (Get to the point: Summarization with pointer-generator networks) targets exactly this problem (although empirically, repetition still seems quite severe to me); see the sketch after this list
- Factual inconsistency
  - Measuring the factual consistency between the source and the summary:
    - The Factual Inconsistency Problem in Abstractive Text Summarization: A Survey
    - Assessing The Factual Accuracy of Generated Text
    - Multi-Fact Correction in Abstractive Text Summarization
    - Evaluating the Factual Consistency of Abstractive Text Summarization
    - Asking and Answering Questions to Evaluate the Factual Consistency of Summaries
    - FEQA: A Question Answering Evaluation Framework for Faithfulness Assessment in Abstractive Summarization
    - Falsesum: Generating Document-level NLI Examples for Recognizing Factual Inconsistency in Summarization
    - QAFactEval: Improved QA-Based Factual Consistency Evaluation for Summarization
    - Investigating Crowdsourcing Protocols for Evaluating the Factual Consistency of Summaries
  - Directly addressing factual inconsistency:
    - Joint Parsing and Generation for Abstractive Summarization
    - Masked Summarization to Generate Factually Inconsistent Summaries for Improved Factual Consistency Checking
- Disfluency or incoherence
- The source is too long to feed into the model directly (Transformer attention is quadratic in sequence length)
  - Extract-then-generate paradigm
    Two stages: first, extract the key textual elements of the source using unsupervised methods or linguistic knowledge; second, rewrite or paraphrase the extracted elements with linguistic rules or text-generation methods to produce an accurate summary. [2]
    Evidence that this paradigm outperforms direct generation: Bottom-Up Abstractive Summarization; Improving neural abstractive document summarization with explicit information selection modeling
  - Splitting/chunking the input data
  - Improving the model architecture
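As promised under the repetition problem above, a minimal sketch of PGN's coverage loss: the coverage vector accumulates past attention, and the loss penalizes re-attending to already-covered source positions (tensor shapes and the toy example are my own illustrative choices):

```python
import torch

def coverage_loss(attentions):
    """attentions: (tgt_len, src_len), one attention distribution per decoder step."""
    coverage = torch.zeros_like(attentions[0])        # c^1 = 0
    total = 0.0
    for a in attentions:
        total = total + torch.min(a, coverage).sum()  # covloss_t = sum_i min(a_i^t, c_i^t)
        coverage = coverage + a                       # c^{t+1} = c^t + a^t
    return total / len(attentions)

# Re-attending to the same position is penalized:
attn = torch.tensor([[0.9, 0.1], [0.9, 0.1]])
print(coverage_loss(attn))  # tensor(0.5000): step 2 fully overlaps step 1
```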
Typical papers that use the seq2seq + attention paradigm for abstractive summarization:
- A Neural Attention Model for Abstractive Sentence Summarization
- Abstractive Sentence Summarization with Attentive Recurrent Neural Networks
- Abstractive Text Summarization using Sequence-to-sequence RNNs and Beyond
- Get To The Point: Summarization with Pointer-Generator Networks
- Abstractive Document Summarization with a Graph-Based Attentional Neural Model
- Feels less typical than the papers above: Query Focused Abstractive Summarization: Incorporating Query Relevance, Multi-Document Coverage, and Summary Length Constraints into seq2seq Models
- A Discourse-Aware Attention Model for Abstractive Summarization of Long Documents
- Structure-Infused Copy Mechanisms for Abstractive Summarization
- A 2019 survey: Abstractive summarization: An overview of the state of the art
1.1.2 Extractive summarization
This section draws on the related-work sections of the following papers: [1][3]
Drawback: lack of coherence where the topic shifts.
- Term Frequency-Inverse Document Frequency Method
- Cluster Based Method: cluster the document into topics; documents are represented by word TF-IDF scores, high-frequency terms represent the theme of a cluster, and summary sentences are selected by their relation to the cluster centroid (see the sketch after this list)
- Text Summarization with Neural Network
- Text Summarization with Fuzzy Logic
- Graph based Method
- Latent Semantic Analysis Method: LSA
- Machine Learning approach
- Query based summarization
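A minimal sketch of the TF-IDF/centroid idea behind the first two methods in the list above: score each sentence by its cosine similarity to the document centroid and keep the top-scoring ones (the sentence list and the top-k choice are my own simplifications):

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def tfidf_centroid_summary(sentences, k=2):
    X = TfidfVectorizer().fit_transform(sentences)  # (n_sents, vocab)
    centroid = np.asarray(X.mean(axis=0))           # the document's "theme" vector
    scores = cosine_similarity(X, centroid).ravel() # similarity of each sentence to the theme
    top = sorted(np.argsort(scores)[-k:])           # keep original sentence order
    return [sentences[i] for i in top]

sents = [
    "The cat sat on the mat.",
    "Cats are popular pets around the world.",
    "The stock market fell sharply today.",
]
print(tfidf_centroid_summary(sents, k=2))
```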
Common paradigm: binary classification over sentences (does this sentence belong in the summary?), then concatenate the sentences predicted positive to form the summary.
"identify and then concatenate the most representative sentences as a summary" (quoted from Friendly Topic Assistant for Transformer Based Abstractive Summarization)
Models typically learn representations at three levels (word → sentence → document), with attention and similar mechanisms to strengthen the representations; a minimal sketch of the classification step follows.
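A minimal sketch of that classification paradigm, with random tensors standing in for real sentence encodings (e.g. BERT sentence vectors):

```python
import torch
import torch.nn as nn

HID = 768  # e.g. BERT hidden size
scorer = nn.Sequential(nn.Linear(HID, 1), nn.Sigmoid())  # "is this sentence in the summary?"

sent_embs = torch.randn(8, HID)        # 8 sentence encodings of one document (stand-ins)
probs = scorer(sent_embs).squeeze(-1)  # per-sentence membership probability
picked = torch.topk(probs, k=3).indices.sort().values
print(picked)  # the 3 chosen sentence indices, restored to document order
```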
- Graph-based representations to capture salient textual units: TF-IDF similarity (Lexrank: Graph-based lexical centrality as salience in text summarization); discourse relations (Textrank: Bringing order into text); document-sentence two-layer relations (An exploration of document impact on graph-based multi-document summarization); multi-modal signals (Graph-based multi-modality learning for topic-focused multidocument summarization); and query information (Mutually reinforced manifold-ranking based relevance propagation model for query-focused multi-document summarization)
- GNN methods to capture cross-document relations: Graph-based neural multi-document summarization (builds a discourse graph and represents textual units with a GCN); Hierarchical transformers for multi-document summarization (uses an entity-linking technique to capture global dependencies between sentences and ranks sentences with a graph-based neural model)
Classic papers on extractive summarization with deep learning:
- SummaRuNNer: A Recurrent Neural Network Based Sequence Model for Extractive Summarization of Documents
- Extractive Summarization using Deep Learning
- Neural Extractive Summarization with Side Information
- Ranking Sentences for Extractive Summarization with Reinforcement Learning
- Fine-tune BERT for Extractive Summarization
- Extractive Summarization of Long Documents by Combining Global and Local Context
- Extractive Summarization as Text Matching
1.2 Single-Document vs. Multi-Document Summarization
1.2.1 Single-document summarization
Topic paper roundup 4: single-document summarization (mostly a paper list) (continuously updated…)
1.2.2 Multi-document summarization
This section draws on the related-work section of the following paper: [3]
Having read a few MDS papers, my impression is that it is essentially just long-document summarization; some papers simply concatenate the documents, separated by an [END] token (A Multi-Document Coverage Reward for RELAXed Multi-Document Summarization). A trivial sketch of that preprocessing follows.
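That concatenation is literally a string join (the separator is whatever special token the model's vocabulary reserves; the document strings are placeholders):

```python
docs = ["First source article ...", "Second source article ...", "Third source article ..."]
model_input = " [END] ".join(docs)  # one flat sequence for a standard seq2seq model
print(model_input)
```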
The input documents may be redundant, or even contain mutually contradictory content (A common theory of information fusion from multiple text sources step one: cross-document structure).
Transferring single-document summarization models to the multi-document setting, to sidestep the lack of large-scale datasets:
- Generating wikipedia by summarizing long sequences: defines the Wikipedia generation problem and introduces the WikiSum dataset.
- Towards a neural network approach to abstractive multi-document summarization.
- Multi-news: A large-scale multi-document summarization dataset and abstractive hierarchical model: introduces the Multi-News dataset and applies a seq2seq model after an extraction step to generate the summary.
- Leveraging graph to improve abstractive multi-document summarization: uses explicit graph representations to model cross-document relations, combined with pre-trained language models to handle long inputs.
1.3 Key Research Directions
- Long-document summarization: Topic paper roundup 4: long-document summarization (continuously updated…)
- Structured text summarization: Topic paper roundup 1: structured text summarization (continuously updated…)
- Dialogue/meeting summarization: Topic paper roundup 2: meeting/dialogue summarization (continuously updated…)
- Wikipedia generation: Topic paper roundup 3: Wikipedia generation (continuously updated…)
- Scientific literature (paper) summarization: Topic paper roundup 5: scientific literature (paper) summarization
2. Summarization Models by Era and Capability
- Rule-based extraction of important content (unsupervised extractive summarization); a sketch of the two baselines below follows this block
  - LEAD-3: simply take the first 3 sentences as the summary (the rationale: important content tends to appear first)
  - (1958) The Automatic Creation of Literature Abstracts: select keywords by word-frequency statistics; keywords form clusters, and the sentences containing the highest-scoring clusters are selected as the summary
    Implementations of a simplified version of the algorithm in various languages:
    - SimpleSummariser (Classifier4J 0.6 API)
    - NClassifier - .NET Text Classification and Summarization Library
    - Summarization using NLTK
  - (2004) TextRank[4]: build a graph from the text and use the PageRank algorithm to find the most important nodes (the intuition mirrors PageRank's)
  - Surveys of this era
    - (2007) A Survey on Automatic Text Summarization
  - Other reference posts
    - TF-IDF与余弦相似性的应用(三):自动摘要 - 阮一峰的网络日志 (Applications of TF-IDF and cosine similarity, part 3: automatic summarization)
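A minimal sketch of the two unsupervised baselines above, LEAD-3 and a TextRank-style ranking over a TF-IDF sentence-similarity graph (the networkx/PageRank wiring is my own simplification of the original algorithm):

```python
import numpy as np
import networkx as nx
from sklearn.feature_extraction.text import TfidfVectorizer

def lead_3(sentences):
    return sentences[:3]  # LEAD-3: the first three sentences are the summary

def textrank(sentences, k=3):
    X = TfidfVectorizer().fit_transform(sentences)  # TF-IDF rows are L2-normalized,
    sim = (X @ X.T).toarray()                       # so dot products are cosine similarities
    np.fill_diagonal(sim, 0.0)                      # no self-loops
    scores = nx.pagerank(nx.from_numpy_array(sim))  # PageRank over the sentence graph
    top = sorted(sorted(scores, key=scores.get, reverse=True)[:k])
    return [sentences[i] for i in top]              # top-k sentences, in document order
```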
- Classic deep learning era
  - Abstractive summarization
    - seq2seq models
      - RNN-based
        - Other work that uses this model but provides no code: LCSTS[5]
      - Transformer-based
        - See the official PyTorch tutorial: Language Modeling with nn.Transformer and TorchText — PyTorch Tutorials 1.11.0+cu102 documentation
    - PGN (Pointer-Generator): the main idea is to copy words from the source with some probability, or otherwise generate new words (see the formula after this list)
      - (2017) PGN[6]
- Early pre-trained-model era
  - (2019) BertSum[7]: extractive. Represents sentences with BERT and picks summary sentences via a multi-label classification setup
  - (2019) UniLM[8]: abstractive. Proposes a new pre-trained model unifying NLG and NLU: combines several masking schemes and predicts cloze-style (wordpiece) tokens
  - (2020) SPACES[9]: a decoupled extract-then-generate model. Represents sentences with NEZHA and extracts them via multi-label classification; the generator is built from UniLM plus other tricks
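Referring back to the PGN entry above: its core is a soft switch $p_{\text{gen}}$ that mixes generating from the vocabulary with copying from the source via the attention distribution (notation as in the pointer-generator paper):

$$p_{\text{gen}} = \sigma\!\left(w_{h^*}^{\top} h_t^* + w_s^{\top} s_t + w_x^{\top} x_t + b_{\text{ptr}}\right)$$

$$P(w) = p_{\text{gen}}\, P_{\text{vocab}}(w) + \left(1 - p_{\text{gen}}\right) \sum_{i:\, w_i = w} a_i^t$$

where $h_t^*$ is the attention context vector, $s_t$ the decoder state, $x_t$ the decoder input, and $a^t$ the attention distribution over source tokens.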
3. Integrated Toolkits
- The sumy package: sumy · PyPI (a usage sketch follows)
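A quick usage sketch of sumy with its LexRank summarizer (based on the package's documented API; the text and sentence count are arbitrary):

```python
from sumy.parsers.plaintext import PlaintextParser
from sumy.nlp.tokenizers import Tokenizer
from sumy.summarizers.lex_rank import LexRankSummarizer

text = "Some long English article text. It has several sentences. Only a few matter."
parser = PlaintextParser.from_string(text, Tokenizer("english"))
summarizer = LexRankSummarizer()
for sentence in summarizer(parser.document, sentences_count=2):
    print(sentence)
```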
4. Evaluation Metrics
I have compiled these in a dedicated post: NLG (natural language generation) evaluation metrics. A quick ROUGE example is sketched below.
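As a quick taste, ROUGE scores can be computed with Google's rouge-score package (the example strings are arbitrary):

```python
# pip install rouge-score
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
scores = scorer.score(
    target="the cat was found under the bed",  # reference summary
    prediction="the cat was under the bed",    # system summary
)
for name, s in scores.items():
    print(name, f"P={s.precision:.2f} R={s.recall:.2f} F1={s.fmeasure:.2f}")
```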
5. Other References Not Credited in the Text or Footnotes
- What problems does text summarization, a classic NLP task, face today? What are the new trends? - 明明如月's answer - Zhihu
- A long-read roundup of extractive summarization in the BERT era - Zhihu
Footnotes:
1. An Overview of Text Summarization Techniques ↩︎ ↩︎
2. Quoted and adapted from LCSTS[5]; original source: (2015 ACL) Abstractive multi-document summarization via phrase selection and merging. ↩︎
3. Re5: Reading notes on TWAG: A Topic-guided Wikipedia Abstract Generator (诸神缄默不语的博客-CSDN博客) ↩︎ ↩︎
4. Textrank: Bringing order into text. ↩︎
5. LCSTS: A Large Scale Chinese Short Text Summarization Dataset ↩︎ ↩︎
6. Get to the point: Summarization with pointer-generator networks. ↩︎
7. Fine-tune BERT for Extractive Summarization. Official source code: nlpyang/BertSum (Code for paper Fine-tune BERT for Extractive Summarization). Community version that accepts Chinese data directly: 425776024/bertsum-chinese (Chinese BertSum extractive model, with sample data and fully commented code; ready to train, predict, and study after download). ↩︎
8. Unified Language Model Pre-training for Natural Language Understanding and Generation ↩︎
9. Introductory post by 苏剑林: SPACES:“抽取-生成”式长文本摘要(法研杯总结) - 科学空间|Scientific Spaces. Official source code: bojone/SPACES (an end-to-end long-document summarization model, 法研杯 CAIL 2020 judicial summarization track). Community PyTorch re-implementation (not fully faithful): eryihaha/SPACES-Pytorch ↩︎