  • Genuine copy: Text Data Mining (文本数据挖掘), by Chengqing Zong, Rui Xia, and Jiajun Zhang, Tsinghua University Press, ISBN 9787302590293
  • Self-operated store under Xinhua Bookstore; brand-new genuine copy
    • Authors: Chengqing Zong, Rui Xia, Jiajun Zhang (宗成庆, 夏睿, 张家俊)
    • Publisher: Tsinghua University Press (清华大学出版社)
    • Publication date: 2021-10

    Product Parameters
    • Authors: Chengqing Zong, Rui Xia, Jiajun Zhang
    • Publisher: Tsinghua University Press
    • Publication date: 2021-10
    • Edition: 1
    • Printing: 1
    • Pages: 351
    • Format: 16mo (16开)
    • Binding: Paperback
    • List price: 119.00
    • ISBN: 9787302590293
    • Rights provider: Tsinghua University Press
    • Printing date: not specified
    • Language: not specified
    • External item no.: 11257899
    • Trim size: not specified

    1 Introduction 1
    1.1 The Basic Concepts 1
    1.2 Main Tasks of Text Data Mining 3
    1.3 Existing Challenges in Text Data Mining 6
    1.4 Overview and Organization of This Book 9
    1.5 Further Reading 12
    2 Data Annotation and Preprocessing 15
    2.1 Data Acquisition 15
    2.2 Data Preprocessing 20
    2.3 Data Annotation 22
    2.4 Basic Tools of NLP 25
    2.4.1 Tokenization and POS Tagging 25
    2.4.2 Syntactic Parser 27
    2.4.3 N-gram Language Model 29
    2.5 Further Reading 30
    3 Text Representation 33
    3.1 Vector Space Model 33
    3.1.1 Basic Concepts 33
    3.1.2 Vector Space Construction 34
    3.1.3 Text Length Normalization 36
    3.1.4 Feature Engineering 37
    3.1.5 Other Text Representation Methods 39
    3.2 Distributed Representation of Words 40
    3.2.1 Neural Network Language Model 41
    3.2.2 C&W Model 45
    3.2.3 CBOW and Skip-Gram Model 47
    3.2.4 Noise Contrastive Estimation and Negative Sampling 49
    3.2.5 Distributed Representation Based on the Hybrid Character-Word Method 51
    3.3 Distributed Representation of Phrases 53
    3.3.1 Distributed Representation Based on the Bag-of-Words Model 54
    3.3.2 Distributed Representation Based on Autoencoder 54
    3.4 Distributed Representation of Sentences 58
    3.4.1 General Sentence Representation 59
    3.4.2 Task-Oriented Sentence Representation 63
    3.5 Distributed Representation of Documents 66
    3.5.1 General Distributed Representation of Documents 67
    3.5.2 Task-Oriented Distributed Representation of Documents 69
    3.6 Further Reading 72
    4 Text Representation with Pretraining and Fine-Tuning 75
    4.1 ELMo: Embeddings from Language Models 75
    4.1.1 Pretraining Bidirectional LSTM Language Models 76
    4.1.2 Contextualized ELMo Embeddings for Downstream Tasks 77
    4.2 GPT: Generative Pretraining 78
    4.2.1 Transformer 78
    4.2.2 Pretraining the Transformer Decoder 80
    4.2.3 Fine-Tuning the Transformer Decoder 81
    4.3 BERT: Bidirectional Encoder Representations from Transformer 82
    4.3.1 BERT: Pretraining 83
    4.3.2 BERT: Fine-Tuning 86
    4.3.3 XLNet: Generalized Autoregressive Pretraining 86
    4.3.4 UniLM 89
    4.4 Further Reading 90
    5 Text Classification 93
    5.1 The Traditional Framework of Text Classification 93
    5.2 Feature Selection 95
    5.2.1 Mutual Information 96
    5.2.2 Information Gain 99
    5.2.3 The Chi-Squared Test Method 100
    5.2.4 Other Methods 101
    5.3 Traditional Machine Learning Algorithms for Text Classification 102
    5.3.1 Naïve Bayes 103
    5.3.2 Logistic/Softmax and Maximum Entropy 105
    5.3.3 Support Vector Machine 107
    5.3.4 Ensemble Methods 110
    5.4 Deep Learning Methods 111
    5.4.1 Multilayer Feed-Forward Neural Network 111
    5.4.2 Convolutional Neural Network 113
    5.4.3 Recurrent Neural Network 115
    5.5 Evaluation of Text Classification 120
    5.6 Further Reading 123
    6 Text Clustering 125
    6.1 Text Similarity Measures 125
    6.1.1 The Similarity Between Documents 125
    6.1.2 The Similarity Between Clusters 128
    6.2 Text Clustering Algorithms 129
    6.2.1 K-Means Clustering 129
    6.2.2 Single-Pass Clustering 133
    6.2.3 Hierarchical Clustering 136
    6.2.4 Density-Based Clustering 138
    6.3 Evaluation of Clustering 141
    6.3.1 External Criteria 141
    6.3.2 Internal Criteria 142
    6.4 Further Reading 143
    7 Topic Model 145
    7.1 The History of Topic Modeling 145
    7.2 Latent Semantic Analysis 146
    7.2.1 Singular Value Decomposition of the Term-by-Document Matrix 147
    7.2.2 Conceptual Representation and Similarity Computation 148
    7.3 Probabilistic Latent Semantic Analysis 150
    7.3.1 Model Hypothesis 150
    7.3.2 Parameter Learning 151
    7.4 Latent Dirichlet Allocation 153
    7.4.1 Model Hypothesis 153
    7.4.2 Joint Probability 155
    7.4.3 Inference in LDA 158
    7.4.4 Inference for New Documents 160
    7.5 Further Reading 161
    8 Sentiment Analysis and Opinion Mining 163
    8.1 History of Sentiment Analysis and Opinion Mining 163
    8.2 Categorization of Sentiment Analysis Tasks 164
    8.2.1 Categorization According to Task Output 164
    8.2.2 Categorization According to Analysis Granularity 165
    8.3 Methods for Document/Sentence-Level Sentiment Analysis 168
    8.3.1 Lexicon- and Rule-Based Methods 169
    8.3.2 Traditional Machine Learning Methods 170
    8.3.3 Deep Learning Methods 174
    8.4 Word-Level Sentiment Analysis and Sentiment Lexicon Construction 178
    8.4.1 Knowledgebase-Based Methods 178
    8.4.2 Corpus-Based Methods 179
    8.4.3 Evaluation of Sentiment Lexicons 182
    8.5 Aspect-Level Sentiment Analysis 183
    8.5.1 Aspect Term Extraction 183
    8.5.2 Aspect-Level Sentiment Classification 186
    8.5.3 Generative Modeling of Topics and Sentiments 191
    8.6 Special Issues in Sentiment Analysis 193
    8.6.1 Sentiment Polarity Shift 193
    8.6.2 Domain Adaptation 195
    8.7 Further Reading 198
    9 Topic Detection and Tracking 201
    9.1 History of Topic Detection and Tracking 201
    9.2 Terminology and Task Definition 202
    9.2.1 Terminology 202
    9.2.2 Task 203
    9.3 Story/Topic Representation and Similarity Computation 206
    9.4 Topic Detection 209
    9.4.1 Online Topic Detection 209
    9.4.2 Retrospective Topic Detection 211
    9.5 Topic Tracking 212
    9.6 Evaluation 213
    9.7 Social Media Topic Detection and Tracking 215
    9.7.1 Social Media Topic Detection 216
    9.7.2 Social Media Topic Tracking 217
    9.8 Bursty Topic Detection 217
    9.8.1 Burst State Detection 218
    9.8.2 Document-Pivot Methods 221
    9.8.3 Feature-Pivot Methods 222
    9.9 Further Reading 224
    10 Information Extraction 227
    10.1 Concepts and History 227
    10.2 Named Entity Recognition 229
    10.2.1 Rule-Based Named Entity Recognition 230
    10.2.2 Supervised Named Entity Recognition Method 231
    10.2.3 Semisupervised Named Entity Recognition Method 239
    10.2.4 Evaluation of Named Entity Recognition Methods 241
    10.3 Entity Disambiguation 242
    10.3.1 Clustering-Based Entity Disambiguation Method 243
    10.3.2 Linking-Based Entity Disambiguation 248
    10.3.3 Evaluation of Entity Disambiguation 254
    10.4 Relation Extraction 256
    10.4.1 Relation Classification Using Discrete Features 258
    10.4.2 Relation Classification Using Distributed Features 265
    10.4.3 Relation Classification Based on Distant Supervision 268
    10.4.4 Evaluation of Relation Classification 269
    10.5 Event Extraction 270
    10.5.1 Event Description Template 270
    10.5.2 Event Extraction Method 272
    10.5.3 Evaluation of Event Extraction 281
    10.6 Further Reading 281

    11 Automatic Text Summarization 285
    11.1 Main Tasks in Text Summarization 285
    11.2 Extraction-Based Summarization 287
    11.2.1 Sentence Importance Estimation 287
    11.2.2 Constraint-Based Summarization Algorithms 298
    11.3 Compression-Based Automatic Summarization 299
    11.3.1 Sentence Compression Method 300
    11.3.2 Automatic Summarization Based on Sentence Compression 305
    11.4 Abstractive Automatic Summarization 307
    11.4.1 Abstractive Summarization Based on Information Fusion 307
    11.4.2 Abstractive Summarization Based on the Encoder-Decoder Framework 313
    11.5 Query-Based Automatic Summarization 316
    11.5.1 Relevance Calculation Based on the Language Model 317
    11.5.2 Relevance Calculation Based on Keyword Co-occurrence

    Chengqing Zong is a professor at the National Laboratory of Pattern Recognition (NLPR), Institute of Automation, Chinese Academy of Sciences. He has served as a chair of prestigious conferences such as ACL-IJCNLP, IJCAI, IJCAI-ECAI, AAAI, and COLING, and as an associate editor of journals such as TALLIP and Machine Translation. He is the President of the Asian Federation of Natural Language Processing and a member of the International Committee on Computational Linguistics.

    Text Data Mining (English edition) is oriented toward the practical needs of text mining tasks. Through examples, it explains the theoretical methods and implementation algorithms of the relevant techniques from first principles. The writing style aims to be concise and accessible rather than dwelling on implementation details, so that readers can master how to build application systems on the basis of a solid understanding of the fundamentals.
    It is suitable for students, researchers, and practitioners interested in text data mining, both as a learning text and as a reference book. Professors can readily use it for courses on text data mining or natural language processing.

    Text Data Mining offers a thorough and detailed introduction to the fundamental theories and methods of text data mining, ranging from preprocessing (for both Chinese and English texts), text representation, and feature selection to text classification and text clustering. It also presents major applications of text data mining, including topic models, sentiment analysis and opinion mining, topic detection and tracking, information extraction, and automatic text summarization.
