The XCore Architecture is a 32-bit RISC microprocessor architecture designed by XMOS. It is intended for use in multi-core processors for embedded systems. Each XCore executes up to eight concurrent threads, each thread having its own register set, and the architecture directly supports inter-thread and inter-core communication and various forms of thread scheduling. Three versions of the XCore architecture exist: XS1, XS2, and XS3. Processors with the XS1 architecture include the XCore XS1-G4 and XCore XS1-L1; processors with the XS2 architecture include the xCORE-200 series; processors with the XS3 architecture include the xcore.ai series. The architecture encodes instructions compactly, using 16 bits for frequently used instructions (with up to three operands) and 32 bits for less frequently used instructions (with up to six operands). Almost all instructions execute in a single cycle, and the architecture is event-driven in order to decouple the timing requirements of a program from its execution speed. A program will normally perform its computations and then wait for an event (e.g. a message, a timer, or an external I/O event) before continuing.
There are three versions of the xCORE architecture: XS1, XS2, and XS3; XS2 extends the XS1 architecture, and XS3 extends the XS2 architecture.
The XS1 architecture was the first xCORE architecture, defined in 2007. It is implemented by the XCore XS1-G4, XCore XS1-L1, XCore XS1-SU, and XCore XS1-AnA. The first is a four-core processing node; the other three are single- and dual-core processing nodes.
The XS2 architecture was defined in 2015. It is implemented by the xCORE-VOICE processors and xCORE-200 series processors. The latter are marketed as the XL2 series (general purpose), XU2 series (USB), XE2 series (RGMII), and versions with embedded flash.
XS2 extends the XS1 architecture with a limited form of dual issue execution.[1] The processor core comprises two lanes: a resource lane, which can execute I/O (resource) operations and general arithmetic, and a memory lane, which can execute memory operations, branches, and general arithmetic. Only short instructions can be issued into a single lane: short resource or arithmetic instructions go to the resource lane, and short memory, branch, or arithmetic instructions go to the memory lane. Long instructions span both lanes. In dual issue mode, all instruction pairs are aligned on a 32-bit boundary.
A few instructions have been added to aid high-bandwidth processing, such as dual-word load/store, dual-word zip and unzip (of bit and byte strings), and dual-word arithmetic, saturation, and shift operations.
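To illustrate what a dual-word zip means, the sketch below interleaves the bytes of two 32-bit words into a 64-bit double word and reverses the operation. It is an illustration of the interleaving concept only, fixed here at byte granularity; the exact operand forms and granularities of the XS2 zip/unzip instructions are defined by the instruction set manual.

```c
#include <stdint.h>

/* Illustration only: a byte-granularity zip of two 32-bit words into a
 * 64-bit double word, and the matching unzip. The XS2 instructions operate
 * on register pairs in hardware; this merely models the interleaving idea. */
static uint64_t zip_bytes(uint32_t hi, uint32_t lo)
{
    uint64_t out = 0;
    for (int i = 0; i < 4; i++) {
        uint64_t h = (hi >> (8 * i)) & 0xff;
        uint64_t l = (lo >> (8 * i)) & 0xff;
        out |= l << (16 * i);          /* even byte lanes come from lo */
        out |= h << (16 * i + 8);      /* odd byte lanes come from hi  */
    }
    return out;
}

static void unzip_bytes(uint64_t in, uint32_t *hi, uint32_t *lo)
{
    *hi = 0;
    *lo = 0;
    for (int i = 0; i < 4; i++) {
        *lo |= (uint32_t)((in >> (16 * i)) & 0xff) << (8 * i);
        *hi |= (uint32_t)((in >> (16 * i + 8)) & 0xff) << (8 * i);
    }
}
```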
The XS3 architecture was introduced in 2020. It is implemented by a new series of xcore.ai processors aimed at embedded and IoT devices utilizing AI acceleration in SoC-like designs.
XS3 extends the XS2 architecture with enhanced DSP performance, new 32-bit floating-point capability, and 256-bit-wide vector instructions. Processors based on this architecture also support a two-lane MIPI interface for cameras or other sensor input, as well as a 16-bit-wide LPDDR interface for external memory. Core clock speed has been increased to 800 MHz, and up to four xCONNECT links are available, providing scalability through the connection of additional xCORE processors.[2]
Of particular note, the new vector capability provides up to a theoretical maximum of 51 GOPS (billion operations per second, e.g. multiply-accumulate) of AI performance using 8-bit data, and 408 GOPS on binarized (1-bit) neural networks.[3][4]
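These figures are roughly consistent with a simple back-of-envelope estimate from the 256-bit vector width and 800 MHz clock quoted above, assuming one vector operation issues per cycle and a multiply-accumulate is counted as two operations (an interpretation for illustration, not a published micro-architectural detail): (256 / 8) MACs per cycle × 2 operations per MAC × 0.8 GHz ≈ 51 GOPS for 8-bit data, and 256 × 2 × 0.8 GHz ≈ 410 GOPS for 1-bit data.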
The architecture comprises a central execution unit that operates on a set of 25 registers and is surrounded by a number of resources that perform operations interacting with the environment. Each thread has its own set of hardware registers, enabling threads to execute concurrently. The instruction set comprises both a (more or less standard) sequential programming model and instructions that implement multi-threading, multi-core, and I/O operations.
Most instructions can access only the 12 general-purpose registers r0–r11. In general, these are completely interchangeable, except that some instructions use r11 implicitly. There are also 4 base registers usable by some instructions: the stack pointer (sp), the data pointer (dp), the constant pool pointer (cp), and the link register (lr), which occupy register numbers 12 to 15.
Registers 16 through 24 are only accessible to specialized instructions. Except for the first two (r16 = pc = program counter, r17 = sr = status register), they are dedicated to exception and interrupt handling.
The status register contains various mode bits, but the processor does not have the standard ALU result flags such as carry, zero, negative, or overflow. Add and subtract with carry instructions exist, but they specify five operand registers: two value inputs plus an input carry, and a value output plus an output carry.
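The sketch below models this five-operand add-with-carry in C and chains two of them into a 64-bit addition; the function names are illustrative, not the architecture's mnemonics.

```c
#include <stdint.h>

/* Behavioural model of a five-operand add-with-carry: two value inputs
 * plus a carry-in, producing a value output plus a carry-out. No hidden
 * condition flags are involved; the carries are ordinary registers (0 or 1). */
static uint32_t add_with_carry(uint32_t a, uint32_t b, uint32_t carry_in,
                               uint32_t *carry_out)
{
    uint64_t wide = (uint64_t)a + b + carry_in;
    *carry_out = (uint32_t)(wide >> 32);   /* 0 or 1 */
    return (uint32_t)wide;
}

/* Chaining the carry register: a 64-bit add built from two 32-bit adds. */
static void add64(uint32_t a_hi, uint32_t a_lo, uint32_t b_hi, uint32_t b_lo,
                  uint32_t *r_hi, uint32_t *r_lo)
{
    uint32_t carry;
    *r_lo = add_with_carry(a_lo, b_lo, 0, &carry);
    *r_hi = add_with_carry(a_hi, b_hi, carry, &carry);
}
```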
Most instructions are 16-bit while a few have 32-bit encoding. Instructions can use between zero and six operands. Most common arithmetic operations (such as ADD, SUB, MULT) are three-operand instructions based on a set of 12 general purpose registers.
The 16-bit instruction formats are laid out as follows (field bit positions given in brackets, most significant bit first):

| Encoding (fields with bit positions) | Description |
|---|---|
| opcode [15:10], immediate [9:0] | 10/20-bit immediate |
| opcode [15:10], register [9:6], immediate [5:0] | register & 6/16-bit immediate |
| opcode [15:10], 11 [9:8], opc [7:6], immediate [5:0] | 6/16-bit immediate |
| opcode [15:11], 9×a+3×b+c [10:6], aa [5:4], bb [3:2], cc [1:0] | three-operand register |
| opcode [15:11], 27+(3×b+c) mod 5 [10:6], ∗ [5], o [4], bb [3:2], cc [1:0] | two-operand register |
| opcode [15:11], 111111 [10:5], o [4], cccc [3:0] | one-operand register |
| opcode [15:11], 111111 [10:5], o [4], 11 [3:2], opc [1:0] | zero-operand |
The last four forms share the same opcode range, because the number of operands is determined by bits 5 through 10. The last three forms use bit 4 as an additional opcode bit. (And the last form uses bits 1 and 0 as well.)
In the second form, some instructions (loads and stores) use all four bits to encode the register number, allowing access to r12–r15. Other instructions (conditional branches) do not allow register numbers above 11, instead allowing the third form to share the opcode range.
Because constants are always unsigned, many instructions come in add/subtract pairs, e.g. jump forward and backward.
The form of an instruction is determined by its four most-significant bits:
The encoding of the three-operand register opcodes is quite unusual, since 12 registers is not a power of 2. The encoding used fits zero to three operands, and the number of operands, into 11 bits. Thus, each 5-bit opcode can be assigned four times, once to a three-operand instruction, once to a two-operand, etc.
In all cases, the low 2 bits of the register number are placed in a 2-bit field, reducing the problem to encoding the high bits, which are in the range of 0 to 2.
The three-operand form places the low bits of the register numbers in the low 6 instruction bits. The high 2 bits of each register number are combined in base 3 into a number between 0 and 26 (using 9×a+3×b+c) and stored in the remaining 5 bits.
The two-operand form uses the 5 otherwise unused combinations (27–31) in the 5-bit field. Operand a is not used, so the 2-bit field for its low bits is reassigned: one bit is used as an additional opcode bit, and the other effectively extends the combination field, doubling the number of available combinations to 10 and allowing all 9 values of 3×b+c to be represented. This is done in a manner similar to bi-quinary coded decimal: the combination, modulo 5, is stored in the 5-bit field (as (3×b+c) mod 5 + 27), and the 1-bit quotient (⌊(3×b+c)/5⌋) is stored in instruction bit 5 (marked with an asterisk in the table above).[5]
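This packing can be written out as a short C sketch. It fills in only the operand fields in bits 10–0 as described above, leaving the 5-bit opcode in bits 15–11 to the caller, and assumes register numbers in the range 0–11.

```c
#include <stdint.h>

/* Register fields of a three-operand short instruction: the low 2 bits of
 * each register go in bits 5-0, and the high parts (each 0-2) are combined
 * in base 3 into bits 10-6. */
static uint16_t pack_regs3(unsigned a, unsigned b, unsigned c)
{
    unsigned combo = 9 * (a >> 2) + 3 * (b >> 2) + (c >> 2);    /* 0..26 */
    return (uint16_t)((combo << 6) |
                      ((a & 3) << 4) | ((b & 3) << 2) | (c & 3));
}

/* Register fields of a two-operand short instruction: 3*b+c (0..8) is
 * stored bi-quinary style, its value mod 5 added to 27 in bits 10-6 and
 * its quotient in bit 5; bit 4 carries the extra opcode bit. */
static uint16_t pack_regs2(unsigned b, unsigned c, unsigned extra_opcode_bit)
{
    unsigned combo = 3 * (b >> 2) + (c >> 2);                   /* 0..8 */
    return (uint16_t)(((27 + combo % 5) << 6) |
                      ((combo / 5) << 5) |
                      ((extra_opcode_bit & 1) << 4) |
                      ((b & 3) << 2) | (c & 3));
}
```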
One-operand instructions use the tenth combination value, with all 6 bits set, and place the register number in the 4 available bits. Only operand c is specified, and the high bits are stored in the b field.
Finally, the one-operand encoding, with a register number 12 or more (the b field contains binary 11), is also used to encode zero-operand instructions. The two low-order bits of the c field are available for additional opcode bits (bringing the total to 8).
(A few instructions use the register c field value 0–11 as a small immediate constant, or use it to select one of 12 convenient bit-shift constants 0–8, 16, 24, or 32.)
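Going the other way, a decoder for this shared opcode range can recover the form and register numbers from bits 10–0, along the lines of the following sketch.

```c
#include <stdint.h>

/* Classify a 16-bit register-form instruction (the shared 5-bit opcode
 * range) and recover its register operands. Returns the operand count;
 * for the two-operand form bit 4 is an extra opcode bit, and for the
 * zero-operand form bits 1-0 are extra opcode bits. */
static int decode_regs(uint16_t inst, unsigned *a, unsigned *b, unsigned *c)
{
    unsigned combo = (inst >> 6) & 0x1f;        /* bits 10-6 */
    unsigned bit5  = (inst >> 5) & 1;           /* quotient / all-ones bit */
    unsigned f_a   = (inst >> 4) & 3;           /* a field (or opcode bit) */
    unsigned f_b   = (inst >> 2) & 3;           /* b field */
    unsigned f_c   = inst & 3;                  /* c field */

    if (combo < 27) {                           /* three operands */
        *a = (combo / 9) * 4 + f_a;
        *b = ((combo / 3) % 3) * 4 + f_b;
        *c = (combo % 3) * 4 + f_c;
        return 3;
    }
    if (combo < 31 || bit5 == 0) {              /* two operands */
        unsigned bc = bit5 * 5 + (combo - 27);  /* 3*b_high + c_high, 0..8 */
        *b = (bc / 3) * 4 + f_b;
        *c = (bc % 3) * 4 + f_c;
        return 2;
    }
    if (f_b != 3) {                             /* one operand */
        *c = f_b * 4 + f_c;                     /* high bits live in the b field */
        return 1;
    }
    return 0;                                   /* zero operands */
}
```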
Less frequently used instructions are encoded in 32 bits. The 32-bit instructions allow 16- or 20-bit immediate operands (used by far branches, for example), up to six register operands (for example, long multiply, which has four source and two destination operands), and additional opcode space for rarely used instructions.
One 10-bit immediate opcode (PFIX, opcode 111100) is used to add an additional 10 bits to the 6- or 10-bit immediate in the following instruction.
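One way to read this composition, consistent with the 16- and 20-bit totals quoted above (the exact bit placement is defined by the instruction set manual), is that the prefix supplies the high-order bits of the following immediate:

```c
#include <stdint.h>

/* Combine a 10-bit PFIX value with the immediate of the instruction that
 * follows it: a 10-bit prefix in front of a 10-bit immediate yields 20 bits,
 * and in front of a 6-bit immediate yields 16 bits. */
static uint32_t apply_pfix(uint32_t prefix10, uint32_t imm, unsigned imm_bits)
{
    return (prefix10 << imm_bits) | (imm & ((1u << imm_bits) - 1u));
}
```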
One three-operand opcode (EOPR, opcode 11111) is reserved for an "additional operands" prefix. Its 3 operands are used along with those of the following instruction word to produce additional 32-bit instructions with up to six operands. This is also used for rarely used three- and two-operand instructions; in such cases the EOPR specifies all three or two operands, and the following instruction word is a zero-operand instruction. (In the two-operand case, the extra opcode bit in the leading EOPR is used.)
Each thread has access to 12 general purpose registers R0...R11. In addition there are 4 special purpose registers: the SP (stack pointer), the LR (link register, which stores the return address), the CP (constant pool pointer, which points to a part of memory that stores constants), and the DP (data pointer, which points to global variables). In addition to those 16 there are another 9 registers that store the PC, the kernel PC, the exception type, the exception data, and saved copies of all of these in case of an exception or interrupt.[6] The instruction set is a load-store instruction set.
Almost all instructions execute in a single cycle. If an instruction does not need data from memory (for example, an arithmetic operation), the memory access slot is used instead to prefetch a word of instructions. Because most instructions are encoded in 16 bits, and because most instructions are not loads or stores (a typical figure is 20% loads and stores, 80% other instructions[7]), the prefetch mechanism can stay ahead of the instruction stream. This acts like a very small instruction cache, but its behaviour can be predicted at compile time, making timing behaviour as predictable as functional behaviour.
Instructions that access memory all use a base register: SP, DP, CP, PC or any general purpose register. In a single 16-bit instruction a thread can access:
Larger sections of memory can be accessed by means of extended instructions, which extend the above ranges to 64 KBytes.
This scheme is designed to densely encode the common cases found in many programs: access to small stack frames, a small set of globals and constants, structures, and arrays. Access to bit fields of irregular length is facilitated by sign- and zero-extend instructions.
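As an illustration of the patterns this scheme targets, consider the C fragment below; the comments describe where a compiler would typically place each access (an assumption about ordinary code generation, not a statement about a particular toolchain).

```c
static int counter;                        /* global variable: data pointer (dp) relative    */
static const int table[4] = {2, 3, 5, 7};  /* constant data: constant pool (cp) relative     */

int step(int i)
{
    int frame[2];                          /* small stack frame: stack pointer (sp) relative */
    frame[0] = counter + 1;
    counter = frame[0];
    frame[1] = table[i & 3];               /* array element: base register plus scaled index */
    return frame[1];
}
```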
All common arithmetic instructions are provided, including divide and remainder (the only instructions that are not single cycle). Comparison instructions compute a truth value (0 or 1) into a register, avoiding the use of flags. Many instructions have an immediate form that allows a single operand with a value between 0 and 11 inclusive, encoding many common cases such as "i = i + 1". For bit operations such as shifts, the immediate value encodes the common cases. Extra instructions are provided for reversing bits and bytes, counting leading zeros, digital signal processing, and long integer arithmetic.
The branch instructions include conditional and unconditional relative branches. A branch using the address in a register is provided, and a relative branch that adds a scaled register operand to the program counter is provided to support jump tables. Branches over short distances are encoded in a single word. The procedure calling instructions include relative calls, calls via the constant pool, indexed calls via a dedicated register, and calls via a register. Most calls within a single program module can be encoded in a single instruction; inter-module calling requires at most two instructions. It is up to the callee to save the link register if it is not a leaf function; a single instruction extends the stack and saves the link register.
Dual issue mode, available on XS2, enables one short load, store, or branch instruction to be paired with one short resource instruction. Short arithmetic instructions can be paired with any instruction. This enables inner loops that, for example, transfer data from memory to I/O to be halved in length by issuing the load instruction together with the add instruction, and the loop-counter update together with the branch instruction.
The XS1 instruction set is designed to support both multi-threaded and multi-core computation. To this end it supports channel communication (to support distributed memory computations) as well as barriers and locks (to support shared memory computations). A thread initiates execution on one or more newly allocated threads by setting their initial register values.
Communication between threads is performed using channels that provide full-duplex data transfer between channel ends. This enables, among other things, the implementation of CSP-based languages and languages based on the pi calculus. The instruction set is agnostic as to where a channel is connected, whether inside or outside the core. Channels carry messages constructed from data and control tokens between the two channel ends; the control tokens can be used to encode communication protocols.
Channel ends have a buffer able to hold sufficient tokens to allow at least one word to be buffered. If an output instruction is executed when the channel is too full to take the data, then the thread that executed the instruction is paused; it is restarted when there is enough room in the channel for the instruction to complete successfully. Likewise, when an input instruction is executed and there is not enough data available, the thread is paused and will be restarted when enough data becomes available.
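This pausing rule can be summarised with a small behavioural model in C; the type, names, and buffer size below are illustrative only, since real channel ends are hardware resources.

```c
#include <stdbool.h>

#define CHANEND_BUF_TOKENS 8           /* illustrative capacity: at least one word's worth */

/* Behavioural model of a channel end's token buffer, capturing only the
 * pause/resume rule for output and input instructions. */
struct chanend_model {
    unsigned char tokens[CHANEND_BUF_TOKENS];
    int count;
};

/* Output: succeeds if there is room; otherwise the issuing thread is paused
 * until space becomes available (modelled here by returning false). */
static bool try_output(struct chanend_model *ce, unsigned char tok)
{
    if (ce->count == CHANEND_BUF_TOKENS)
        return false;                  /* thread would pause here */
    ce->tokens[ce->count++] = tok;
    return true;
}

/* Input: succeeds if a token is available; otherwise the issuing thread is
 * paused until data arrives (modelled here by returning false). */
static bool try_input(struct chanend_model *ce, unsigned char *tok)
{
    if (ce->count == 0)
        return false;                  /* thread would pause here */
    *tok = ce->tokens[0];
    for (int i = 1; i < ce->count; i++)
        ce->tokens[i - 1] = ce->tokens[i];
    ce->count--;
    return true;
}
```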
A thread can, with a single instruction, synchronise with a group of threads using a barrier synchronisation. Alternatively a thread can synchronise using a lock, providing mutual exclusion. In order to communicate data when using barriers and locks, threads can either write data into the registers of another thread, or they can access memory of another thread (provided both threads execute on the same core). If shared memory is used, then the compiler or the programmer must ensure that there are no race conditions.
The XS2 architecture has a 'priority mode' that enables threads to run at high priority. Low priority threads are guaranteed progress, while high priority threads are guaranteed a thread cycle whenever they are ready to execute.
The XS1 architecture is event-driven. It has an instruction that can dispatch on external events in addition to traditional interrupts. If a program chooses to use events, it must be waiting for the event at a specific point so that the event can be handled synchronously. If desired, I/O can instead be handled asynchronously using interrupts. Events and interrupts can be used with any resource that the implementation supports.
Common resources that are supported are ports (for external input and output), timers (that allow timing to a reference clock), channels (that allow communication and synchronization between threads within a core, and threads on different cores), locks (which allow controlled access to shared memory), and synchronizers (which implement barrier synchronizations between threads).
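A thread that uses events is typically structured along the following lines. The sketch is schematic C: enable_event and wait_for_event are hypothetical stand-ins for the architecture's event setup and wait operations, not a real API.

```c
/* Hypothetical sketch of the event-driven pattern: enable the events of
 * interest on a set of resources, then block in one place until one fires. */
enum event_source { EV_PORT_IN, EV_TIMER, EV_CHANNEL };

extern void enable_event(enum event_source src);   /* hypothetical stand-in */
extern enum event_source wait_for_event(void);     /* hypothetical stand-in */

void control_loop(void)
{
    enable_event(EV_PORT_IN);
    enable_event(EV_TIMER);
    enable_event(EV_CHANNEL);

    for (;;) {
        /* Computation done; wait here so the event is handled synchronously. */
        switch (wait_for_event()) {
        case EV_PORT_IN: /* react to external input */ break;
        case EV_TIMER:   /* react to a timeout      */ break;
        case EV_CHANNEL: /* react to a message      */ break;
        }
    }
}
```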
The content is sourced from: https://handwiki.org/wiki/XCore_Architecture