Reduced instruction set computer

A Sun UltraSPARC, a RISC microprocessor

A reduced instruction set computer, or RISC (/rɪsk/), is a computer with a small, highly optimized set of instructions, rather than the more specialized set often found in other types of architecture, such as in a complex instruction set computer (CISC).[1] The main distinguishing feature of RISC architecture is that the instruction set is optimized with a large number of registers and a highly regular instruction pipeline, allowing a low number of clock cycles per instruction (CPI). Another common RISC feature is the load/store architecture,[2] in which memory is accessed through specific instructions rather than as a part of most instructions in the set.

Although a number of computers from the 1960s and 1970s have been identified as forerunners of RISCs, the modern concept dates to the 1980s. In particular, two projects at Stanford University and the University of California, Berkeley, are most associated with the popularization of this concept. Stanford's MIPS would go on to be commercialized as the successful MIPS architecture, while Berkeley's RISC gave its name to the entire concept and was commercialized as the SPARC. Another success from this era was IBM's effort that eventually led to the IBM POWER instruction set architecture, PowerPC, and Power ISA. As these projects matured, a variety of similar designs flourished in the late 1980s and especially the early 1990s, representing a major force in the Unix workstation market as well as for embedded processors in laser printers, routers, and similar products.

The many varieties of RISC designs include ARC, Alpha, Am29000, ARM, Atmel AVR, Blackfin, i860, i960, M88000, MIPS, PA-RISC, Power ISA (including PowerPC), RISC-V, SuperH, and SPARC. The use of ARM architecture processors in smartphones and tablet computers such as the iPad and Android devices provided a wide user base for RISC-based systems.
RISC processors are also used in supercomputers, such as Fugaku, which, as of June 2020, is the world's fastest supercomputer.[3]

History and development

Alan Turing's 1946 Automatic Computing Engine (ACE) design had many of the characteristics of a RISC architecture.[4] A number of systems, going back to the 1960s, have been credited as the first RISC architecture, partly based on their use of the load/store approach.[5] The term RISC was coined by David Patterson of the Berkeley RISC project, although somewhat similar concepts had appeared before.[6]

The CDC 6600 designed by Seymour Cray in 1964 used a load/store architecture with only two addressing modes (register+register, and register+immediate constant) and 74 operation codes, with the basic clock cycle being 10 times faster than the memory access time.[7] Partly due to the optimized load/store architecture of the CDC 6600, Jack Dongarra says that it can be considered a forerunner of modern RISC systems, although a number of other technical barriers needed to be overcome for the development of a modern RISC system.[8]

An IBM PowerPC 601 RISC microprocessor

Michael J. Flynn views the first RISC system as the IBM 801 design, begun in 1975 by John Cocke and completed in 1980.[2] The 801 was eventually produced in a single-chip form as the IBM ROMP in 1981, which stood for 'Research OPD [Office Products Division] Micro Processor'.[9] This CPU was designed for "mini" tasks and was also used in the IBM RT PC in 1986, which turned out to be a commercial failure.[10] But the 801 inspired several research projects, including new ones at IBM that would eventually lead to the IBM POWER instruction set architecture.[11][12]

In the mid-1970s, researchers (particularly John Cocke at IBM, along with similar projects elsewhere) demonstrated that the majority of combinations of the orthogonal addressing modes and instructions offered by contemporary CPUs were not used by most programs generated by the compilers available at the time. It proved difficult in many cases to write a compiler with more than limited ability to take advantage of the features provided by conventional CPUs. It was also discovered that, on microcoded implementations of certain architectures, complex operations tended to be slower than a sequence of simpler operations doing the same thing. This was in part an effect of the fact that many designs were rushed, with little time to optimize or tune every instruction; only those used most often were optimized, and a sequence of those instructions could be faster than a less-tuned instruction performing an equivalent operation as that sequence. One infamous example was the VAX's INDEX instruction.[13]

Core memory had long been slower than many CPU designs. The advent of semiconductor memory reduced this difference, but it was still apparent that more registers (and later caches) would allow higher CPU operating frequencies. Additional registers would require sizeable chip or board areas which, at the time (1975), could be made available if the complexity of the CPU logic was reduced.
The most public RISC designs, however, were the results of university research programs run with funding from the DARPA VLSI Program. The VLSI Program, practically unknown today, led to a huge number of advances in chip design, fabrication, and even computer graphics.

The Berkeley RISC project started in 1980 under the direction of David Patterson and Carlo H. Séquin.[6][13][14] Berkeley RISC was based on gaining performance through the use of pipelining and an aggressive use of a technique known as register windowing.[13][14] In a traditional CPU, one has a small number of registers, and a program can use any register at any time. In a CPU with register windows, there are a huge number of registers, e.g., 128, but programs can only use a small number of them, e.g., eight, at any one time. A program that limits itself to eight registers per procedure can make very fast procedure calls: the call simply moves the window "down" by eight, to the set of eight registers used by that procedure, and the return moves the window back.[15]

The Berkeley RISC project delivered the RISC-I processor in 1982. Consisting of only 44,420 transistors (compared with averages of about 100,000 in newer CISC designs of the era), RISC-I had only 32 instructions and yet completely outperformed any other single-chip design. They followed this up with the 40,760-transistor, 39-instruction RISC-II in 1983, which ran over three times as fast as RISC-I.[14]

The MIPS project grew out of a graduate course by John L. Hennessy at Stanford University in 1981, resulted in a functioning system in 1983, and could run simple programs by 1984.[16] The MIPS approach emphasized an aggressive clock cycle and the use of the pipeline, making sure it could be run as "full" as possible.[16] The MIPS system was followed by the MIPS-X, and in 1984 Hennessy and his colleagues formed MIPS Computer Systems.[16][17] The commercial venture resulted in a new architecture that was also called MIPS, and the R2000 microprocessor in 1985.[17]

RISC-V prototype chip (2013)

In the early 1980s, significant doubts surrounded the RISC concept, and it was unclear whether it could have a commercial future, but by the mid-1980s the concepts had matured enough to be seen as commercially viable.[10][16] In 1986 Hewlett-Packard started using an early implementation of its PA-RISC in some of its computers.[10] In the meantime, the Berkeley RISC effort had become so well known that it eventually became the name for the entire concept, and in 1987 Sun Microsystems began shipping systems with the SPARC processor, directly based on the Berkeley RISC-II system.[10][18] The US government Committee on Innovations in Computing and Communications credits the acceptance of the viability of the RISC concept to the success of the SPARC system.[10] The success of SPARC renewed interest within IBM, which released new RISC systems by 1990, and by 1995 RISC processors were the foundation of a $15 billion server industry.[10]

Since 2010 a new open source instruction set architecture (ISA), RISC-V, has been under development at the University of California, Berkeley, for research purposes and as a free alternative to proprietary ISAs. As of 2014, version 2 of the user-space ISA was frozen.[19] The ISA is designed to be extensible from a bare-bones core sufficient for a small embedded processor up to supercomputer and cloud computing use, with standard and chip-designer-defined extensions and coprocessors.
It has been tested in silicon with the Rocket SoC, which is also available as an open-source processor generator written in the Chisel language.

Characteristics and design philosophy

Further information: Processor design

Instruction set philosophy

A common misunderstanding of the phrase "reduced instruction set computer" is the mistaken idea that instructions are simply eliminated, resulting in a smaller set of instructions.[20] In fact, over the years, RISC instruction sets have grown in size, and today many of them have a larger set of instructions than many CISC CPUs.[21][22] Some RISC processors such as the PowerPC have instruction sets as large as that of the CISC IBM System/370, for example; conversely, the DEC PDP-8—clearly a CISC CPU because many of its instructions involve multiple memory accesses—has only 8 basic instructions and a few extended instructions.[23]

The term "reduced" in that phrase was intended to describe the fact that the amount of work any single instruction accomplishes is reduced—at most a single data memory cycle—compared to the "complex instructions" of CISC CPUs that may require dozens of data memory cycles in order to execute a single instruction.[24] In particular, RISC processors typically have separate instructions for I/O and data processing.[25] The term load/store architecture is sometimes preferred.

Instruction format

Most RISC architectures have fixed-length instructions (commonly 32 bits) and a simple encoding, which simplifies fetch, decode, and issue logic considerably. One drawback of 32-bit instructions is reduced code density, which matters more in embedded computing than in the workstation and server markets RISC architectures were originally designed to serve. To address this problem, several architectures, such as ARM, Power ISA, MIPS, RISC-V, and the Adapteva Epiphany, have an optional short, feature-reduced instruction format or an instruction compression feature. The SH5 also follows this pattern, albeit having evolved in the opposite direction, having added longer media instructions to an original 16-bit encoding.
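As an illustration of how regular such encodings are, the minimal Python sketch below extracts the fields of a 32-bit RISC-V RV32I instruction word. It covers only the base field layout; the function name and the printed example are chosen here for illustration, and it is not a complete decoder.

```python
# Sketch: field extraction from a fixed-length 32-bit RISC-V (RV32I) word.
# Every instruction is the same width, and the opcode and register fields
# always occupy the same bit positions, so decoding is just shifts and masks.

def decode_rv32i(word: int) -> dict:
    opcode = word & 0x7F            # bits 6..0:   major opcode
    rd     = (word >> 7) & 0x1F     # bits 11..7:  destination register
    funct3 = (word >> 12) & 0x7     # bits 14..12: minor opcode
    rs1    = (word >> 15) & 0x1F    # bits 19..15: first source register
    rs2    = (word >> 20) & 0x1F    # bits 24..20: second source register
    imm_i  = word >> 20             # bits 31..20: 12-bit I-type immediate
    if imm_i & 0x800:               # sign-extend the immediate
        imm_i -= 0x1000
    return {"opcode": opcode, "rd": rd, "funct3": funct3,
            "rs1": rs1, "rs2": rs2, "imm_i": imm_i}

# "addi x5, x6, 42" encodes as 0x02A30293; the constant 42 travels inside
# the instruction word itself (immediate addressing), so no extra memory
# access is needed to fetch it.
print(decode_rv32i(0x02A30293))
# {'opcode': 19, 'rd': 5, 'funct3': 0, 'rs1': 6, 'rs2': 10, 'imm_i': 42}
# (rs2 = 10 is meaningless here: for I-type instructions those bits are
#  part of the immediate field.)
```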
Hardware utilization

For any given level of general performance, a RISC chip will typically have far fewer transistors dedicated to the core logic, which originally allowed designers to increase the size of the register set and the degree of internal parallelism.

Other features of RISC architectures include:

- Average throughput approaching one instruction per cycle
- A uniform instruction format, using a single word with the opcode in the same bit positions for simpler decoding
- All general-purpose registers usable equally as source or destination in all instructions, simplifying compiler design (floating-point registers are often kept separate)
- Simple addressing modes, with complex addressing performed by instruction sequences
- Few data types in hardware (no byte string or BCD, for example)

RISC designs are also more likely to feature a Harvard memory model, where the instruction stream and the data stream are conceptually separated; this means that modifying the memory where code is held might not have any effect on the instructions executed by the processor (because the CPU has separate instruction and data caches), at least until a special synchronization instruction is issued. On the upside, this allows both caches to be accessed simultaneously, which can often improve performance.

Many early RISC designs also shared the characteristic of having a branch delay slot, an instruction space immediately following a jump or branch. The instruction in this space is executed whether or not the branch is taken (in other words the effect of the branch is delayed). This instruction keeps the ALU of the CPU busy for the extra time normally needed to perform a branch. Nowadays the branch delay slot is considered an unfortunate side effect of a particular strategy for implementing some RISC designs, and modern RISC designs generally do away with it (such as PowerPC and more recent versions of SPARC and MIPS).

Some aspects attributed to the first RISC-labeled designs around 1975 include the observations that the memory-restricted compilers of the time were often unable to take advantage of features intended to facilitate manual assembly coding, and that complex addressing modes take many cycles to perform due to the required additional memory accesses. It was argued that such functions would be better performed by sequences of simpler instructions if this could yield implementations small enough to leave room for many registers, reducing the number of slow memory accesses. In these simple designs, most instructions are of uniform length and similar structure, arithmetic operations are restricted to CPU registers, and only separate load and store instructions access memory. These properties enable a better balancing of pipeline stages than before, making RISC pipelines significantly more efficient and allowing higher clock frequencies.
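The following minimal Python sketch shows what this restriction means in practice. It is not modeled on any particular ISA; the register file, memory cells, and operation names are hypothetical. To compute a = b + c, a load/store machine issues two loads, a register-only add, and a store, each doing a small, fixed amount of work.

```python
# Illustrative model of a load/store machine: memory is touched only by
# explicit load and store operations, and arithmetic works on registers only.

memory = {"a": 0, "b": 7, "c": 35}   # hypothetical named memory locations
regs = [0] * 8                       # a small general-purpose register file

def load(rd, addr):                  # the only way to read memory
    regs[rd] = memory[addr]

def store(rs, addr):                 # the only way to write memory
    memory[addr] = regs[rs]

def add(rd, rs1, rs2):               # arithmetic never touches memory
    regs[rd] = regs[rs1] + regs[rs2]

# a = b + c becomes a four-instruction sequence instead of a single
# memory-to-memory "complex" instruction:
load(1, "b")      # r1 <- mem[b]
load(2, "c")      # r2 <- mem[c]
add(3, 1, 2)      # r3 <- r1 + r2
store(3, "a")     # mem[a] <- r3
print(memory["a"])   # prints 42
```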
Yet another impetus of both RISC and other designs came from practical measurements on real-world programs. Andrew Tanenbaum summed up many of these, demonstrating that processors often had oversized immediates. For instance, he showed that 98% of all the constants in a program would fit in 13 bits, yet many CPU designs dedicated 16 or 32 bits to store them. This suggests that, to reduce the number of memory accesses, a fixed-length machine could store constants in unused bits of the instruction word itself, so that they would be immediately ready when the CPU needs them (much like immediate addressing in a conventional design). This required small opcodes in order to leave room for a reasonably sized constant in a 32-bit instruction word.

Since many real-world programs spend most of their time executing simple operations, some researchers decided to focus on making those operations as fast as possible. The clock rate of a CPU is limited by the time it takes to execute the slowest sub-operation of any instruction; decreasing that cycle time often accelerates the execution of other instructions.[26] The focus on "reduced instructions" led to the resulting machine being called a "reduced instruction set computer" (RISC). The goal was to make instructions so simple that they could easily be pipelined, in order to achieve single-clock throughput at high frequencies (a timing sketch follows at the end of this section).

Later, it was noted that one of the most significant characteristics of RISC processors was that external memory was only accessible by a load or store instruction. All other instructions were limited to internal registers. This simplified many aspects of processor design: allowing instructions to be fixed-length, simplifying pipelines, and isolating the logic for dealing with the delay in completing a memory access (cache miss, etc.) to only two instructions. This led to RISC designs being referred to as load/store architectures.[27]
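As a rough illustration of why such simple, uniform instructions reward pipelining, the Python sketch below prints the timing of a classic five-stage RISC pipeline (fetch, decode, execute, memory access, write-back) under the idealized assumption of no stalls or hazards: N instructions retire in N + 4 cycles, so the average cycles per instruction approaches 1 as the instruction stream grows.

```python
# Idealized classic five-stage RISC pipeline (no stalls, no hazards).
# Instruction i occupies stage s during cycle i + s, so the stages of
# successive instructions overlap and one instruction retires per cycle
# once the pipeline is full.

STAGES = ["IF", "ID", "EX", "MEM", "WB"]

def pipeline_timeline(n_instructions: int) -> None:
    total_cycles = n_instructions + len(STAGES) - 1
    for cycle in range(total_cycles):
        row = []
        for i in range(n_instructions):
            s = cycle - i
            row.append(STAGES[s] if 0 <= s < len(STAGES) else ".")
        print(f"cycle {cycle:2d}: " + " ".join(f"{x:>3}" for x in row))
    print(f"{n_instructions} instructions in {total_cycles} cycles "
          f"-> CPI = {total_cycles / n_instructions:.2f}")

pipeline_timeline(8)   # 12 cycles for 8 instructions; CPI = 1.50 here,
                       # tending toward 1.00 for longer instruction streams
```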
Comparison to other architectures

Some CPUs have been specifically designed to have a very small set of instructions, but these designs are very different from classic RISC designs, so they have been given other names such as minimal instruction set computer (MISC) or transport triggered architecture (TTA).

RISC architectures have traditionally had few successes in the desktop PC and commodity server markets, where the x86-based platforms remain the dominant processor architecture. However, this may change, as ARM-based processors are being developed for higher performance systems.[28] Manufacturers including Cavium, AMD, and Qualcomm have released server processors based on the ARM architecture.[29][30] ARM further partnered with Cray in 2017 to produce an ARM-based supercomputer.[31] On the desktop, Microsoft announced that it planned to support the PC version of Windows 10 on Qualcomm Snapdragon-based devices in 2017 as part of its partnership with Qualcomm. These devices will support Windows applications compiled for 32-bit x86 via an x86 processor emulator that translates 32-bit x86 code to ARM64 code.[32][33] Apple announced that it will transition its Mac desktop and laptop computers from Intel processors to internally developed ARM64-based SoCs called Apple Silicon; Macs with Apple Silicon will be able to run x86-64 binaries with Rosetta 2, an x86-64 to ARM64 translator.[34] Outside of the desktop arena, however, the ARM RISC architecture is in widespread use in smartphones, tablets, and many forms of embedded device.

Since the Pentium Pro (P6), Intel x86 processors have internally translated x86 CISC instructions into one or more RISC-like micro-operations, scheduling and executing the micro-operations separately.[35] While early RISC designs differed significantly from contemporary CISC designs, by 2000 the highest-performing CPUs in the RISC line were almost indistinguishable from the highest-performing CPUs in the CISC line.[36][37][38]

Use of RISC architectures

RISC architectures are now used across a range of platforms, from smartphones and tablet computers to some of the world's fastest supercomputers such as Summit, the fastest on the TOP500 list as of November 2018.[39]

Low-end and mobile systems

By the beginning of the 21st century, the majority of low-end and mobile systems relied on RISC architectures.[40] Examples include:

- The ARM architecture dominates the market for low-power and low-cost embedded systems (typically 200–1800 MHz in 2014). It is used in systems such as most Android-based devices, the Apple iPhone and iPad, Microsoft Windows Phone (formerly Windows Mobile), RIM devices, the Nintendo Game Boy Advance, DS, 3DS, and Switch, the Raspberry Pi, etc.
- IBM's PowerPC was used in the GameCube, Wii, PlayStation 3, Xbox 360, and Wii U gaming consoles.
- The MIPS line (at one point used in many SGI computers) was used in the PlayStation, PlayStation 2, Nintendo 64, and PlayStation Portable game consoles, and in residential gateways like the Linksys WRT54G series.
- Hitachi's SuperH, originally in wide use in the Sega Super 32X, Saturn, and Dreamcast, now developed and sold by Renesas as the SH4.
- Atmel AVR, used in a variety of products ranging from Xbox handheld controllers and the Arduino open-source microcontroller platform to BMW cars.
- RISC-V, the open-source fifth Berkeley RISC ISA, with 32- or 64-bit address spaces, a small core integer instruction set, an experimental "Compressed" ISA for code density, and a design intended for standard and special-purpose extensions.

Workstations, servers, and supercomputers

- Apple-designed processors based on the ARM architecture, to be used in Apple's lineup of desktop and laptop computers following its transition from Intel processors.[41]
- MIPS, by Silicon Graphics (which ceased making MIPS-based systems in 2006).
- SPARC, by Oracle (previously Sun Microsystems) and Fujitsu.
- IBM's POWER instruction set architecture, PowerPC, and Power ISA, most famously known for their use in many Macintosh computer models prior to Apple's transition to Intel processors,[42] and in many of IBM's supercomputers, mid-range servers, and workstations.
- Hewlett-Packard's PA-RISC, also known as HP-PA (discontinued at the end of 2008).
- Alpha, used in single-board computers, workstations, servers, and supercomputers from Digital Equipment Corporation, then Compaq and finally HP (discontinued as of 2007).
- RISC-V, the open-source fifth Berkeley RISC ISA, with 64- or 128-bit address spaces, the integer core extended with floating point, atomics, and vector processing, and designed to be extended with instructions for networking, I/O, and data processing. A 64-bit superscalar design, "Rocket", is available for download.

See also

Addressing mode
Classic RISC pipeline
Complex instruction set computer
Computer architecture
Instruction set architecture
Microprocessor
Minimal instruction set computer

References
^ Berezinski, John. "RISC — Reduced instruction set computer". Department of Computer Science, Northern Illinois University. Archived from the original on 28 February 2017.
^ a b Flynn, Michael J. (1995). Computer architecture: pipelined and parallel processor design. pp. 54–56. ISBN 0867202041.
^ "Japan's Fugaku gains title as world's fastest supercomputer". RIKEN. Retrieved 24 June 2020.
^ Doran, Robert (2005). "Computer architecture and the ACE computers". In Copeland, Jack (ed.). Alan Turing's Electronic Brain: The Struggle to Build the ACE, the World's Fastest Computer. Oxford: Oxford University Press. ISBN 978-0199609154.
^ Fisher, Joseph A.; Faraboschi, Paolo; Young, Cliff (2005). Embedded Computing: A VLIW Approach to Architecture, Compilers and Tools. p. 55. ISBN 1558607668.
^ a b Reilly, Edwin D. (2003). Milestones in computer science and information technology. p. 50. ISBN 1-57356-521-0.
^ Grishman, Ralph (1974). Assembly Language Programming for the Control Data 6000 Series and the Cyber 70 Series. Algorithmics Press. p. 12. OCLC 425963232.
^ Dongarra, Jack J.; et al. (1987). Numerical Linear Algebra on High-Performance Computers. p. 6. ISBN 0-89871-428-1.
^ Šilc, Jurij; Robič, Borut; Ungerer, Theo (1999). Processor architecture: from dataflow to superscalar and beyond. p. 33. ISBN 3-540-64798-8.
^ a b c d e f Funding a Revolution: Government Support for Computing Research. Committee on Innovations in Computing and Communications. 1999. ISBN 0-309-06278-0. p. 239.
^ Nurmi, Jari (2007). Processor design: system-on-chip computing for ASICs and FPGAs. pp. 40–43. ISBN 978-1-4020-5529-4.
^ Hill, Mark Donald; Jouppi, Norman Paul; Sohi, Gurindar (1999). Readings in computer architecture. pp. 252–254. ISBN 1-55860-539-8.
^ a b c Patterson, D. A.; Ditzel, D. R. (1980). "The case for the reduced instruction set computer". ACM SIGARCH Computer Architecture News. 8 (6): 25–33. CiteSeerX 10.1.1.68.9623. doi:10.1145/641914.641917. S2CID 12034303.
^ a b c Patterson, David A.; Séquin, Carlo H. (1981). RISC I: A Reduced Instruction Set VLSI Computer. 8th Annual Symposium on Computer Architecture. Minneapolis, MN, USA. pp. 443–457. doi:10.1145/285930.285981.
^ Séquin, Carlo; Patterson, David (July 1982). Design and Implementation of RISC I (PDF). Advanced Course on VLSI Architecture. University of Bristol. CSD-82-106.
^ a b c d Chow, Paul (1989). The MIPS-X RISC microprocessor. pp. xix–xx. ISBN 0-7923-9045-8.
^ a b Nurmi 2007, pp. 52–53.
^ Tucker, Allen B. (2004). Computer science handbook. pp. 100–106. ISBN 1-58488-360-X.
^ Waterman, Andrew; Lee, Yunsup; Patterson, David A.; Asanović, Krste. "The RISC-V Instruction Set Manual, Volume I: Base User-Level ISA version 2 (Technical Report EECS-2014-54)". University of California, Berkeley. Retrieved 26 December 2014.
^ Esponda, Margarita; Rojas, Raúl (September 1991). "Section 2: The confusion around the RISC concept". The RISC Concept — A Survey of Implementations. Freie Universität Berlin. B-91-12.
^ Stokes, Jon "Hannibal". "RISC vs. CISC: the Post-RISC Era". Ars Technica.
^ Borrett, Lloyd (June 1991). "RISC versus CISC". Australian Personal Computer.
^ Jones, Douglas W. "Doug Jones's DEC PDP-8 FAQs". PDP-8 Collection, The University of Iowa Department of Computer Science.
^ Dandamudi, Sivarama P. (2005). "Ch. 3: RISC Principles". Guide to RISC Processors for Programmers and Engineers. Springer. pp. 39–44. doi:10.1007/0-387-27446-4_3. ISBN 978-0-387-21017-9. "the main goal was not to reduce the number of instructions, but the complexity"
^ Ambriz, Kelly (25 May 1999). "I/O processor for optimal data transfer". EE Times. AspenCore, Inc. "The 32-bit RISC processors can be broken down into microcontrollers, host processors, embedded processors and I/O processors."
^ "Microprocessors From the Programmer's Perspective" by Andrew Schulman, 1990.
^ Dowd, Kevin; Loukides, Michael K. (1993). High Performance Computing. O'Reilly. ISBN 1565920325.
^ Vincent, James (9 March 2017). "Microsoft unveils new ARM server designs, threatening Intel's dominance". The Verge. Retrieved 12 May 2017.
^ Russell, John (31 May 2016). "Cavium Unveils ThunderX2 Plans, Reports ARM Traction is Growing". HPC Wire. Retrieved 8 March 2017.
^ "AMD's first ARM-based processor, the Opteron A1100, is finally here". ExtremeTech. 14 January 2016. Retrieved 14 August 2016.
^ Feldman, Michael (18 January 2017). "Cray to Deliver ARM-Powered Supercomputer to UK Consortium". Top500.org. Retrieved 12 May 2017.
^ "Microsoft is bringing Windows desktop apps to mobile ARM processors". The Verge. Vox Media. 8 December 2016. Retrieved 8 December 2016.
^ "How x86 emulation works on ARM". Microsoft Docs. 15 February 2018.
^ "Apple announces Mac transition to Apple silicon" (Press release). Cupertino, California: Apple Inc. 22 June 2020. Retrieved 18 July 2020.
^ Srinivasan, Sundar (2009). "Intel x86 Processors – CISC or RISC? Or both??".
^ Carter, Nicholas P. (2002). Schaum's Outline of Computer Architecture. p. 96. ISBN 0-07-136207-X.
^ Jones, Douglas L. (2000). "CISC, RISC, and DSP Microprocessors" (PDF).
^ Singh, Amit. "A History of Apple's Operating Systems". "the line between RISC and CISC has been growing fuzzier over the years"
^ "Top 500 The List: November 2018". TOP 500. Retrieved 22 November 2018.
^ Dandamudi 2005, pp. 121–123.
^ DeAngelis, Marc (22 June 2020). "Apple starts its two-year transition to ARM this week". Engadget. Retrieved 24 August 2020. "Apple has officially announced that it will be switching from Intel processors to its own ARM-based, A-series chips in its Mac computers."
^ Bennett, Amy (2005). "Apple shifting from PowerPC to Intel". Computerworld. Retrieved 24 August 2020.

External links

"RISC vs. CISC". RISC Architecture. Stanford University. 2000.
"What is RISC". RISC Architecture. Stanford University. 2000.
Savard, John J. G. "Not Quite RISC". Computers.
Mashey, John R. (5 September 2000). "Yet Another Post of the Old RISC Post [unchanged from last time]". Newsgroup: comp.arch. Usenet: 8p20b0$dhh$3@murrow.corp.sgi.com. "Nth re-posting of CISC vs RISC (or what is RISC, really)."