  • [Genuine] Computer Architecture: A Quantitative Approach (English Edition, Original 6th Edition) COMPUTER ARCHITECTURE A Quan
  • A classic work by Turing Award winners Hennessy and Patterson
    • Author: John L. Hennessy
    • Publisher: China Machine Press (机械工业出版社)
    Seller: 粉象优品图书专营店
    Product Parameters
    • Author: John L. Hennessy
    • Publisher: China Machine Press
    • ISBN: 9780047269450
    • Copyright provided by: China Machine Press

    Store Announcement

    This store carries special items such as rare, antiquarian, collectible, and second-hand books. Because of procurement costs, these may be sold above the publisher's list price. All prices are clearly marked; please do not place an order if this is a concern.

    1. Scarce books may be priced above the list price. The actual list price is given in the basic information in the details below; please check and confirm it before ordering to avoid pricing disputes.

    2. The store does not issue paper invoices; invoices are issued electronically. Please contact customer service to obtain the electronic version.

      Product Basic Information
    Product name:  Computer Architecture: A Quantitative Approach (English Edition, Original 6th Edition)
    Author:  [US] John L. Hennessy, David A. Patterson
    Market price:  269.00 (CNY)
    ISBN:  9787111631101
    Edition/printing:  1-1
    Publication date:  1900-01
    Pages:  932
    Word count:  500
    Publisher:  China Machine Press
      Table of Contents
    Chapter 1 Fundamentals of Quantitative Design and Analysis
    1.1 Introduction 2
    1.2 Classes of Computers 6
    1.3 Defining Computer Architecture 11
    1.4 Trends in Technology 18
    1.5 Trends in Power and Energy in Integrated Circuits 23
    1.6 Trends in Cost 29
    1.7 Dependability 36
    1.8 Measuring, Reporting, and Summarizing Performance 39
    1.9 Quantitative Principles of Computer Design 48
    1.10 Putting It All Together: Performance, Price, and Power 55
    1.11 Fallacies and Pitfalls 58
    1.12 Concluding Remarks 64
    1.13 Historical Perspectives and References 67
    Case Studies and Exercises by Diana Franklin 67
    Chapter 2 Memory Hierarchy Design
    2.1 Introduction 78
    2.2 Memory Technology and Optimizations 84
    2.3 Ten Advanced Optimizations of Cache Performance 94
    2.4 Virtual Memory and Virtual Machines 118
    2.5 Cross-Cutting Issues: The Design of Memory Hierarchies 126
    2.6 Putting It All Together: Memory Hierarchies in the ARM Cortex-A53 and Intel Core i7 6700 129
    2.7 Fallacies and Pitfalls 142
    2.8 Concluding Remarks: Looking Ahead 146
    2.9 Historical Perspectives and References 148
    Case Studies and Exercises by Norman P. Jouppi, Rajeev Balasubramonian, Naveen Muralimanohar, and Sheng Li
    Chapter 3 Instruction-Level Parallelism and Its Exploitation
    3.1 Instruction-Level Parallelism: Concepts and Challenges 168
    3.2 Basic Compiler Techniques for Exposing ILP 176
    3.3 Reducing Branch Costs With Advanced Branch Prediction 182
    3.4 Overcoming Data Hazards With Dynamic Scheduling 191
    3.5 Dynamic Scheduling: Examples and the Algorithm 201
    3.6 Hardware-Based Speculation 208
    3.7 Exploiting ILP Using Multiple Issue and Static Scheduling 218
    3.8 Exploiting ILP Using Dynamic Scheduling, Multiple Issue, and Speculation 222
    3.9 Advanced Techniques for Instruction Delivery and Speculation 228
    3.10 Cross-Cutting Issues 240
    3.11 Multithreading: Exploiting Thread-Level Parallelism to Improve Uniprocessor Throughput 242
    3.12 Putting It All Together: The Intel Core i7 6700 and ARM Cortex-A53 247
    3.13 Fallacies and Pitfalls 258
    3.14 Concluding Remarks: What’s Ahead? 264
    3.15 Historical Perspective and References 266
    Case Studies and Exercises by Jason D. Bakos and Robert P. Colwell 266
    Chapter 4 Data-Level Parallelism in Vector, SIMD, and GPU Architectures
    4.1 Introduction 282
    4.2 Vector Architecture 283
    4.3 SIMD Instruction Set Extensions for Multimedia 304
    4.4 Graphics Processing Units 310
    4.5 Detecting and Enhancing Loop-Level Parallelism 336
    4.6 Cross-Cutting Issues 345
    4.7 Putting It All Together: Embedded Versus Server GPUs and Tesla Versus Core i7 346
    4.8 Fallacies and Pitfalls 353
    4.9 Concluding Remarks 357
    4.10 Historical Perspective and References 357
    Case Study and Exercises by Jason D. Bakos 357
    Chapter 5 Thread-Level Parallelism
    5.1 Introduction 368
    5.2 Centralized Shared-Memory Architectures 377
    5.3 Performance of Symmetric Shared-Memory Multiprocessors 393
    5.4 Distributed Shared-Memory and Directory-Based Coherence 404
    5.5 Synchronization: The Basics 412
    5.6 Models of Memory Consistency: An Introduction 417
    5.7 Cross-Cutting Issues 422
    5.8 Putting It All Together: Multicore Processors and Their Performance 426
    5.9 Fallacies and Pitfalls 438
    5.10 The Future of Multicore Scaling 442
    5.11 Concluding Remarks 444
    5.12 Historical Perspectives and References 445
    Case Studies and Exercises by Amr Zaky and David A. Wood 446
    Chapter 6 Warehouse-Scale Computers to Exploit Request-Level and Data-Level Parallelism
    6.1 Introduction 466
    6.2 Programming Models and Workloads for Warehouse-Scale Computers 471
    6.3 Computer Architecture of Warehouse-Scale Computers 477
    6.4 The Efficiency and Cost of Warehouse-Scale Computers 482
    6.5 Cloud Computing: The Return of Utility Computing 490
    6.6 Cross-Cutting Issues 501
    6.7 Putting It All Together: A Google Warehouse-Scale Computer 503
    6.8 Fallacies and Pitfalls 514
    6.9 Concluding Remarks 518
    6.10 Historical Perspectives and References 519
    Case Studies and Exercises by Parthasarathy Ranganathan 519
    Chapter 7 Domain-Specific Architectures
    7.1 Introduction 540
    7.2 Guidelines for DSAs 543
    7.3 Example Domain: Deep Neural Networks 544
    7.4 Google’s Tensor Processing Unit, an Inference Data Center Accelerator 557
    7.5 Microsoft Catapult, a Flexible Data Center Accelerator 567
    7.6 Intel Crest, a Data Center Accelerator for Training 579
    7.7 Pixel Visual Core, a Personal Mobile Device Image Processing Unit 579
    7.8 Cross-Cutting Issues 592
    7.9 Putting It All Together: CPUs Versus GPUs Versus DNN Accelerators 595
    7.10 Fallacies and Pitfalls 602
    7.11 Concluding Remarks 604
    7.12 Historical Perspectives and References 606
    Case Studies and Exercises by Cliff Young 606
    Appendix A Instruction Set Principles
    A.1 Introduction A-2
    A.2 Classifying Instruction Set Architectures A-3
    A.3 Memory Addressing A-7
    A.4 Type and Size of Operands A-13
       About the Book
        For more than 20 years, this book has been essential reading for teachers, students, and architecture designers in the field of computing. Its two authors, Hennessy and Patterson, received the 2017 Turing Award in recognition of their lasting and significant technical contributions to the field. The 6th edition has been comprehensively revised to reflect the latest developments in processors and system architectures. This edition adopts the RISC-V instruction set architecture, a modern RISC instruction set designed as a free and openly adoptable standard. It also adds a new chapter on domain-specific architectures and updates the chapter on warehouse-scale computing, which describes Google's newest WSC. As in previous editions, the book aims to demystify computer architecture, highlighting exciting technical innovations while emphasizing good engineering design.
        