
  • [Genuine] 8061310 | Computer Organization and Design: The Hardware/Software Interface, RISC-V Edition (English, original 5th edition; Classic Original Books series on computer architecture)
  • Limit of one copy per customer; additional copies will not be shipped. Thank you for your cooperation.
    • Author: David A. Patterson
    • Publisher: China Machine Press

    Merchant: 如梦图书专营店

    Product Details
    • Author: David A. Patterson
    • Publisher: China Machine Press
    • ISBN: 9787111631118
    • Rights provided by: China Machine Press

    Store Announcement

    To protect consumers' legitimate purchasing needs and fair trading opportunities, and to prevent unlawful practices such as hoarding goods for non-consumer purposes and reselling at inflated prices, the store reserves the right to withhold shipment on abnormal orders without compensation. Abnormal orders include, but are not limited to: bulk orders placed from the same user ID; bulk orders (more than five copies at once) placed by the same user (meaning different user IDs sharing identical, adjacent, or fictitious shipping addresses, or the same phone number, recipient, or paying account); and other orders placed for non-consumer purposes.

    Reminder: please open and inspect the package in front of the courier. If you find damage, photograph it, refuse delivery immediately, and contact online customer service. (If damage is found during inspection at delivery, we bear the resulting shipping costs. Once the package is signed for, the goods are deemed intact; any loss from not inspecting on delivery is borne by the buyer, so please inspect carefully.)

    Return shipping: once an order has shipped and the goods are in transit, return requests are generally not accepted; for returns due to the buyer's own reasons, the buyer bears round-trip shipping. For product quality problems (other than shipping damage), contact online customer service after signing for the package.

    //////// Featured Title ////////
    Keep at it, readers!
    Title: 8061310 | Computer Organization and Design: The Hardware/Software Interface, RISC-V Edition (English, original 5th edition; Classic Original Books series on computer architecture)
    List price: 229 yuan
    Authors: David A. Patterson, John L. Hennessy
    Publisher: China Machine Press
    Publication date: 2019-07-09
    ISBN: 9787111631118
    Format: 16mo (16开)
    Pages: 692
    Printing: 1-1
    About the Book
    This book is the latest version of the classic Computer Organization and Design, following the MIPS and ARM editions; this edition focuses on RISC-V and is another landmark work by Patterson and Hennessy. The RISC-V instruction set, the first open-source architecture, is designed for modern computing environments such as cloud computing, mobile computing, and embedded systems of all kinds. The book pays particular attention to the changes of the post-PC era, introducing the latest computing models through examples and exercises; updated content also covers tablet computers, cloud infrastructure, and the ARM (mobile computing devices) and x86 (cloud computing) architectures.
    Table of Contents
    CHAPTERS
    1 Computer Abstractions and Technology 2
    1.1 Introduction 3
    1.2 Eight Great Ideas in Computer Architecture 11
    1.3 Below Your Program 13
    1.4 Under the Covers 16
    1.5 Technologies for Building Processors and Memory 24
    1.6 Performance 28
    1.7 The Power Wall 40
    1.8 The Sea Change: The Switch from Uniprocessors to Multiprocessors 43
    1.9 Real Stuff: Benchmarking the Intel Core i7 46
    1.10 Fallacies and Pitfalls 49
    1.11 Concluding Remarks 52
    1.12 Historical Perspective and Further Reading 54
    1.13 Exercises 54
    2 Instructions: Language of the Computer 60
    2.1 Introduction 62
    2.2 Operations of the Computer Hardware 63
    2.3 Operands of the Computer Hardware 67
    2.4 Signed and Unsigned Numbers 74
    2.5 Representing Instructions in the Computer 81
    2.6 Logical Operations 89
    2.7 Instructions for Making Decisions 92
    2.8 Supporting Procedures in Computer Hardware 98
    2.9 Communicating with People 108
    2.10 RISC-V Addressing for Wide Immediates and Addresses 113
    2.11 Parallelism and Instructions: Synchronization 121
    2.12 Translating and Starting a Program 124
    2.13 A C Sort Example to Put it All Together 133
    2.14 Arrays versus Pointers 141
    2.15 Advanced Material: Compiling C and Interpreting Java 144
    2.16 Real Stuff: MIPS Instructions 145
    2.17 Real Stuff: x86 Instructions 146
    2.18 Real Stuff: The Rest of the RISC-V Instruction Set 155
    2.19 Fallacies and Pitfalls 157
    2.20 Concluding Remarks 159
    2.21 Historical Perspective and Further Reading 162
    2.22 Exercises 162
    3 Arithmetic for Computers 172
    3.1 Introduction 174
    3.2 Addition and Subtraction 174
    3.3 Multiplication 177
    3.4 Division 183
    3.5 Floating Point 191
    3.6 Parallelism and Computer Arithmetic: Subword Parallelism 216
    3.7 Real Stuff: Streaming SIMD Extensions and Advanced Vector Extensions in x86 217
    3.8 Going Faster: Subword Parallelism and Matrix Multiply 218
    3.9 Fallacies and Pitfalls 222
    3.10 Concluding Remarks 225
    3.11 Historical Perspective and Further Reading 227
    3.12 Exercises 227
    4 The Processor 234
    4.1 Introduction 236
    4.2 Logic Design Conventions 240
    4.3 Building a Datapath 243
    4.4 A Simple Implementation Scheme 251
    4.5 An Overview of Pipelining 262
    4.6 Pipelined Datapath and Control 276
    4.7 Data Hazards: Forwarding versus Stalling 294
    4.8 Control Hazards 307
    4.9 Exceptions 315
    4.10 Parallelism via Instructions 321
    4.11 Real Stuff: The ARM Cortex-A53 and Intel Core i7 Pipelines 334
    4.12 Going Faster: Instruction-Level Parallelism and Matrix Multiply 342
    4.13 Advanced Topic: An Introduction to Digital Design Using a Hardware Design Language to Describe and Model a Pipeline and More Pipelining Illustrations 345
    4.14 Fallacies and Pitfalls 345
    4.15 Concluding Remarks 346
    4.16 Historical Perspective and Further Reading 347
    4.17 Exercises 347
    5 Large and Fast: Exploiting Memory Hierarchy 364
    5.1 Introduction 366
    5.2 Memory Technologies 370
    5.3 The Basics of Caches 375
    5.4 Measuring and Improving Cache Performance 390
    5.5 Dependable Memory Hierarchy 410
    5.6 Virtual Machines 416
    5.7 Virtual Memory 419
    5.8 A Common Framework for Memory Hierarchy 443
    5.9 Using a Finite-State Machine to Control a Simple Cache 449
    5.10 Parallelism and Memory Hierarchy: Cache Coherence 454
    5.11 Parallelism and Memory Hierarchy: Redundant Arrays of Inexpensive Disks 458
    5.12 Advanced Material: Implementing Cache Controllers 459
    5.13 Real Stuff: The ARM Cortex-A53 and Intel Core i7 Memory Hierarchies 459
    5.14 Real Stuff: The Rest of the RISC-V System and Special Instructions 464
    5.15 Going Faster: Cache Blocking and Matrix Multiply 465
    5.16 Fallacies and Pitfalls 468
    5.17 Concluding Remarks 472
    5.18 Historical Perspective and Further Reading 473
    5.19 Exercises 473
    6 Parallel Processors from Client to Cloud 490
    6.1 Introduction 492
    6.2 The Difficulty of Creating Parallel Processing Programs 494
    6.3 SISD, MIMD, SIMD, SPMD, and Vector 499
    6.4 Hardware Multithreading 506
    6.5 Multicore and Other Shared Memory Multiprocessors 509
    6.6 Introduction to Graphics Processing Units 514
    6.7 Clusters, Warehouse Scale Computers, and Other Message-Passing Multiprocessors 521
    6.8 Introduction to Multiprocessor Network Topologies 526
    6.9 Communicating to the Outside World: Cluster Networking 529
    6.10 Multiprocessor Benchmarks and Performance Models 530
    6.11 Real Stuff: Benchmarking and Rooflines of the Intel Core i7 960 and the NVIDIA Tesla GPU 540
    6.12 Going Faster: Multiple Processors and Matrix Multiply 545
    6.13 Fallacies and Pitfalls 548
    6.14 Concluding Remarks 550
    6.15 Historical Perspective and Further Reading 553
    6.16 Exercises 553
    APPENDIX
    A The Basics of Logic Design A-2
    A.1 Introduction A-3
    A.2 Gates, Truth Tables, and Logic Equations A-4
    A.3 Combinational Logic A-9
    A.4 Using a Hardware Description Language A-20
    A.5 Constructing a Basic Arithmetic Logic Unit A-26
    A.6 Faster Addition: Carry Lookahead A-37
    A.7 Clocks A-47
    A.8 Memory Elements: Flip-Flops, Latches, and Registers A-49
    A.9 Memory Elements: SRAMs and DRAMs A-57
    A.10 Finite-State Machines A-66
    A.11 Timing Methodologies A-71
    A.12 Field Programmable Devices A-77
    A.13 Concluding Remarks A-78
    A.14 Exercises A-79
    Index I-1
    ONLINE CONTENT
    Graphics and Computing GPUs B-2
    B.1 Introduction B-3
    B.2 GPU System Architectures B-7
    B.3 Programming GPUs B-12
    B.4 Multithreaded Multiprocessor Architecture B-25
    B.5 Parallel Memory System B-36
    B.6 Floating Point Arithmetic B-41
    B.7 Real Stuff: The NVIDIA GeForce 8800 B-46
    B.8 Real Stuff: Mapping Applications to GPUs B-55
    B.9 Fallacies and Pitfalls B-72
    B.10 Concluding Remarks B-76
    B.11 Historical Perspective and Further Reading B-77
    Mapping Control to Hardware C-2
    C.1 Introduction C-3
    C.2 Implementing Combinational Control Units C-4
    C.3 Implementing Finite-State Machine Control C-8
    C.4 Implementing the Next-State Function with a Sequencer C-22
    C.5 Translating a Microprogram to Hardware C-28
    C.6 Concluding Remarks C-32
    C.7 Exercises C-33
    A Survey of RISC Architectures for Desktop, Server, and Embedded Computers D-2
    D.1 Introduction D-3
    D.2 Addressing Modes and Instruction Formats D-5
    D.3 Instructions: the MIPS Core Subset D-9
    D.4 Instructions: Multimedia Extensions of the Desktop/Server RISCs D-16
    D.5 Instructions: Digital Signal-Processing Extensions of the Embedded RISCs D-19
    D.6 Instructions: Common Extensions to MIPS Core D-20
    D.7 Instructions Unique to MIPS-64 D-25
    D.8 Instructions Unique to Alpha D-27
    D.9 Instructions Unique to SPARC v9 D-29
    D.10 Instructions Unique to PowerPC D-32
    D.11 Instructions Unique to PA-RISC 2.0 D-34
    D.12 Instructions Unique to ARM D-36
    D.13 Instructions Unique to Thumb D-38
    D.14 Instructions Unique to SuperH D-39
    D.15 Instructions Unique to M32R D-40
    D.16 Instructions Unique to MIPS-16 D-40
    D.17 Concluding Remarks D-43
    Glossary G-1
    Further Reading FR-1
    The most beautiful thing we can experience is the mysterious. It is the source of all true art and science.
    Albert Einstein, What I Believe, 1930
    About This Book
    We believe that learning in computer science and engineering should reflect the current state of the field, as well as introduce the principles that are shaping computing. We also feel that readers in every specialty of computing need to appreciate the organizational paradigms that determine the capabilities, performance, energy, and, ultimately, the success of computer systems.
    Modern computer technology requires professionals of every computing specialty to understand both hardware and software. The interaction between hardware and software at a variety of levels also offers a framework for understanding the fundamentals of computing. Whether your primary interest is hardware or software, computer science or electrical engineering, the central ideas in computer organization and design are the same. Thus, our emphasis in this book is to show the relationship between hardware and software and to focus on the concepts that are the basis for current computers.
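The hardware/software relationship the authors emphasize can be made concrete with a tiny example of our own (not taken from the book): a one-line C function alongside the RISC-V assembly a compiler typically emits for it. The function name is ours; the assembly shown is the usual RV32 output under the standard calling convention, though real compiler output varies with flags.

```c
#include <stdint.h>

/* A one-line C function; the comment sketches the RISC-V (RV32)
   assembly a compiler typically emits for it under the standard
   calling convention. Illustrative only. */
int32_t add_two(int32_t a, int32_t b) {
    /* RISC-V:
         add  a0, a0, a1   # arguments arrive in a0 and a1; sum into a0
         ret               # the result is returned in a0            */
    return a + b;
}
```

Even this trivial pairing shows the interface at work: the calling convention, register names, and return path are hardware-level decisions that every compiled program depends on.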
    The recent switch from uniprocessor to multicore microprocessors confirmed the soundness of this perspective, given since the first edition. While programmers could ignore the advice and rely on computer architects, compiler writers, and silicon engineers to make their programs run faster or be more energy-efficient without change, that era is over. For programs to run faster, they must become parallel. While the goal of many researchers is to make it possible for programmers to be unaware of the underlying parallel nature of the hardware they are programming, it will take many years to realize this vision. Our view is that for at least the next decade, most programmers are going to have to understand the hardware/software interface if they want programs to run efficiently on parallel computers.
    The audience for this book includes those with little experience in assembly language or logic design who need to understand basic computer organization as well as readers with backgrounds in assembly language and/or logic design who want to learn how to design a computer or understand how a system works and why it performs as it does.
    About the Other Book
    Some readers may be familiar with Computer Architecture: A Quantitative Approach, popularly known as Hennessy and Patterson. (This book in turn is often called Patterson and Hennessy.) Our motivation in writing the earlier book was to describe the principles of computer architecture using solid engineering fundamentals and quantitative cost/performance tradeoffs. We used an approach that combined examples and measurements, based on commercial systems, to create realistic design experiences. Our goal was to demonstrate that computer architecture could be learned using quantitative methodologies instead of a descriptive approach. It was intended for the serious computing professional who wanted a detailed understanding of computers.
    A majority of the readers for this book do not plan to become computer architects. The performance and energy efficiency of future software systems will be dramatically affected, however, by how well software designers understand the basic hardware techniques at work in a system. Thus, compiler writers, operating system designers, database programmers, and most other software engineers need a firm grounding in the principles presented in this book. Similarly, hardware designers must understand clearly the effects of their work on software applications.
    Thus, we knew that this book had to be much more than a subset of the material in Computer Architecture, and the material was extensively revised to match the different audience. We were so happy with the result that the subsequent editions of Computer Architecture were revised to remove most of the introductory material; hence, there is much less overlap today than with the first editions of both books.
    Why RISC-V for This Edition?
    The choice of instruction set architecture is clearly critical to the pedagogy of a computer architecture textbook. We didn’t want an instruction set that required describing unnecessary baroque features for someone’s first instruction set, no matter how popular it is. Ideally, your initial instruction set should be an exemplar, just like your first love. Surprisingly, you remember both fondly.
    Since there were so many choices at the time, for the first edition of Computer Architecture: A Quantitative Approach we invented our own RISC-style instruction set. Given the growing popularity and the simple elegance of the MIPS instruction set, we switched to it for the first edition of this book and to later editions of the other book. MIPS has served us and our readers well.
    It’s been 20 years since we made that switch, and while billions of chips that use MIPS continue to be shipped, they are typically found in embedded devices where the instruction set is nearly invisible. Thus, for a while now it’s been hard to find a real computer on which readers can download and run MIPS programs.
    The good news is that an open instruction set that adheres closely to the RISC principles has recently debuted, and it is rapidly gaining a following. RISC-V, which was developed originally at UC Berkeley, not only cleans up the quirks of the MIPS instruction set, but it offers a simple, elegant, modern take on what instruction sets should look like in 2017.
    Moreover, because it is not proprietary, there are open-source RISC-V simulators, compilers, debuggers, and so on easily available and even open-source RISC-V implementations available written in hardware description languages. In addition, there will soon be low-cost hardware platforms on which to run RISC-V programs. Readers will not only benefit from studying these RISC-V designs, they will be able to modify them and go through the implementation process in order to understand the impact of their hypothetical changes on performance, die size, and energy.
    This is an exciting opportunity for the computing industry as well as for education, and thus at the time of this writing more than 40 companies have joined the RISC-V Foundation. This sponsor list includes virtually all the major players except ARM and Intel: AMD, Google, Hewlett Packard Enterprise, IBM, Microsoft, NVIDIA, Oracle, and Qualcomm, among others.
    It is for these reasons that we wrote a RISC-V edition of this book, and we are switching Computer Architecture: A Quantitative Approach to RISC-V as well.
    Given that RISC-V offers both 32-bit address instructions and 64-bit address instructions with essentially the same instruction set, we could have switched instruction sets but kept the address size at 32 bits. Our publisher polled the faculty who used the book and found that 75% either preferred larger addresses or were neutral, so we increased the address space to 64 bits, which may make more sense today than 32 bits.
    The only changes for the RISC-V edition from the MIPS edition are those associated with the change in instruction sets, which primarily affects Chapter 2, Chapter 3, the virtual memory section in Chapter 5, and the short VMIPS example in Chapter 6. In Chapter 4, we switched to RISC-V instructions, changed several figures, and added a few “Elaboration” sections, but the changes were simpler than we had feared. Chapter 1 and the rest of the appendices are virtually unchanged.
    The extensive online documentation for RISC-V, combined with the magnitude of the instruction set, makes it difficult to come up with a replacement for the MIPS version of Appendix A (“Assemblers, Linkers, and the SPIM Simulator” in the MIPS Fifth Edition). Instead, Chapters 2, 3, and 5 include quick overviews of the hundreds of RISC-V instructions outside of the core RISC-V instructions that we cover in detail in the rest of the book.
    Note that we are not (yet) saying that we are permanently switching to RISC-V. For example, in addition to this new RISC-V edition, there are ARMv8 and MIPS versions available for sale now. One possibility is that there will be a demand for all versions for future editions of the book, or for just one. We’ll cross that bridge when we come to it. For now, we look forward to your reaction to and feedback on this effort.
    Changes for the Fifth Edition
    We had six major goals for the fifth edition of Computer Organization and Design:
    • demonstrate the importance of understanding hardware with a running example;
    • highlight main themes across the topics using margin icons that are introduced early;
    • update examples to reflect the changeover from the PC era to the post-PC era;
    • spread the material on I/O throughout the book rather than isolating it in a single chapter;
    • update the technical content to reflect changes in the industry since the publication of the fourth edition in 2009;
    • put appendices and optional sections online instead of including a CD, to lower costs and to make this edition viable as an electronic book.
    Before discussing the goals in detail, let’s look at the table on the next page. It shows the hardware and software paths through the material. Chapters 1, 4, 5, and 6 are found on both paths, no matter what the experience or the focus. Chapter 1 discusses the importance of energy and how it motivates the switch from single-core to multicore microprocessors, and introduces the eight great ideas in computer architecture.

    Chapter 2 is likely to be review material for the hardware-oriented, but it is essential reading for the software-oriented, especially for those readers interested in learning more about compilers and object-oriented programming languages. Chapter 3 is for readers interested in constructing a datapath or in learning more about floating-point arithmetic. Some will skip parts of Chapter 3, either because they don’t need them or because they offer a review. However, we introduce the running example of matrix multiply in this chapter, showing how subword parallelism offers a fourfold improvement, so don’t skip Sections 3.6 to 3.8.

    Chapter 4 explains pipelined processors. Sections 4.1, 4.5, and 4.10 give overviews, and Section 4.12 gives the next performance boost for matrix multiply for those with a software focus. Those with a hardware focus, however, will find that this chapter presents core material; they may also, depending on their background, want to read Appendix A on logic design first. The last chapter, on multicores, multiprocessors, and clusters, is mostly new content and should be read by everyone. It was significantly reorganized in this edition to make the flow of ideas more natural and to include much more depth on GPUs, warehouse-scale computers, and the hardware–software interface of network interface cards that are key to clusters.
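The matrix multiply running example starts from a completely unoptimized kernel before the book layers on its optimizations. As a rough sketch of that starting point (our illustration, assuming row-major storage; the book's own listing differs in details such as array layout):

```c
#include <stddef.h>

/* Unoptimized n x n double-precision matrix multiply, C = C + A*B.
   Matrices are stored in row-major one-dimensional arrays, so the
   element at row i, column j of an n x n matrix lives at index i*n + j. */
void dgemm(size_t n, const double *A, const double *B, double *C) {
    for (size_t i = 0; i < n; ++i)
        for (size_t j = 0; j < n; ++j) {
            double cij = C[i * n + j];           /* accumulate into C */
            for (size_t k = 0; k < n; ++k)
                cij += A[i * n + k] * B[k * n + j];
            C[i * n + j] = cij;
        }
}
```

This triply nested loop is the baseline that subword parallelism (Chapter 3), loop unrolling (Chapter 4), cache blocking (Chapter 5), and multiple threads (Chapter 6) each speed up in turn.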
    The first of the six goals for this fifth edition was to demonstrate the importance of understanding modern hardware to get good performance and energy efficiency with a concrete example. As mentioned above, we start with subword parallelism in Chapter 3 to improve matrix multiply by a factor of 4. We double performance in Chapter 4 by unrolling the loop to demonstrate the value of instruction-level parallelism. Chapter 5 doubles performance again by optimizing for caches using blocking. Finally, Chapter 6 demonstrates a speedup of 14 from 16 processors by using thread-level parallelism. All four optimizations in total add just 24 lines of C code to our initial matrix multiply example.
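Of the four optimizations, the cache-blocking step from Chapter 5 is easy to sketch in isolation. The version below is our illustration of the idea, not the book's code: the loops are restructured to work on BLOCK x BLOCK sub-matrices so that each sub-block of A, B, and C is reused while it still resides in the cache; the block size here is an assumed, tunable constant.

```c
#include <stddef.h>

#define BLOCK 32  /* assumed block size; tune so three sub-blocks fit in cache */

static size_t min_sz(size_t a, size_t b) { return a < b ? a : b; }

/* Cache-blocked C = C + A*B for row-major n x n matrices: the outer
   loops step over BLOCK x BLOCK tiles, and the inner loops perform an
   ordinary matrix multiply confined to one tile at a time. */
void dgemm_blocked(size_t n, const double *A, const double *B, double *C) {
    for (size_t ii = 0; ii < n; ii += BLOCK)
        for (size_t jj = 0; jj < n; jj += BLOCK)
            for (size_t kk = 0; kk < n; kk += BLOCK)
                for (size_t i = ii; i < min_sz(ii + BLOCK, n); ++i)
                    for (size_t j = jj; j < min_sz(jj + BLOCK, n); ++j) {
                        double cij = C[i * n + j];
                        for (size_t k = kk; k < min_sz(kk + BLOCK, n); ++k)
                            cij += A[i * n + k] * B[k * n + j];
                        C[i * n + j] = cij;
                    }
}
```

The arithmetic is identical to the unblocked version; only the traversal order changes, which is why the optimization improves cache behavior without altering results.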
    The second goal was to help readers separate the forest from the trees by identifying eight great ideas of computer architecture early and then pointing out all the places they occur throughout the rest of the book. We use (hopefully) easy-to-remember margin icons and highlight the corresponding word in the text to remind readers of these eight themes. There are nearly 100 citations in the book. No chapter has fewer than seven examples of great ideas, and no idea is cited fewer than five times. Performance via parallelism, pipelining, and prediction are the three most popular great ideas, followed closely by Moore’s Law. Chapter 4, The Processor, is the one with the most examples, which is not a surprise since it probably received the most attention from computer architects. The one great idea found in every chapter is performance via parallelism, which is a pleasant observation given the recent emphasis on parallelism in the field and in editions of this book.
    The third goal was to recognize the generation change in computing from the PC era to the post-PC era by this edition with our examples and material. Thus, Chapter 1 dives into the guts of a tablet computer rather than a PC, and Chapter 6 describes the computing infrastructure of the cloud. We also feature the ARM, which is the instruction set of choice in the personal mobile devices of the post-PC era, as well as the x86 instruction set that dominated the PC era and (so far) dominates cloud computing.
    The fourth goal was to spread the I/O material throughout the book rather than have it in its own chapter, much as we spread parallelism throughout all the chapters in the fourth edition. Hence, I/O material in this edition can be found in Sections 1.4, 4.9, 5.2, 5.5, 5.11, and 6.9. The thought is that readers (and instructors) are more likely to cover I/O if it’s not segregated to its own chapter.
    This is a fast-moving field, and, as is always the case for our new editions, an important goal is to update the technical content. The running example is the ARM Cortex A53 and the Intel Core i7, reflecting our p..
    All books sold in this store are genuine, licensed editions.
