computer architecture - Difference between “system-on-chip” and “CPU”

2014-07
  • Questioner

    I'm very confused. Some websites have this line:

    iPhone 5s
    
    CPU: Apple A7
    

    Other websites say:

    iPhone 5s
    System-on-chip: Apple A7
    CPU: 1.3 GHz 64-bit dual-core
    

    Other sources say:

    iPhone 5s
    System-on-chip: Apple A7
    CPU: 1.3 GHz 64-bit dual-core Apple A7
    

    Wikipedia says:

    The Apple A7 is a 64-bit system on a chip (SoC) designed by Apple Inc. It first appeared in the iPhone 5S, which was introduced on September 10, 2013. Apple states that it is up to twice as fast and has up to twice the graphics power compared to its predecessor, the Apple A6. While not the first 64-bit ARM CPU, it is the first to ship in a consumer smartphone or tablet computer.

    There are 2 sentences:

    The Apple A7 is a 64-bit system on a chip (SoC)

    and

    While not the first 64-bit ARM CPU

    Wikipedia also says: “The A7 features an Apple-designed 64-bit 1.3–1.4 GHz ARMv8-A dual-core CPU, called Cyclone”. So is the system on a chip also the CPU? I'm very confused.

  • Answers
  • Dougvj

    The confusion stems from the fact that a System on a Chip ALWAYS contains a CPU. Traditionally, computers are built from various discrete components, including the following simplified examples:

    • CPU (Central Processing Unit) - Executes code, makes decisions, and manages hardware

    • FPU (Floating-Point Unit) - Coprocessor for floating-point math

    • RAM (Random Access Memory) - Working storage for the CPU's calculations and processing

    • GPU (Graphics Processing Unit) - Coprocessor for 2D and 3D graphics

    • I/O (Input/Output) - Units for input and output devices such as keyboards and printers

    As you can see, a CPU is an important part of a system, but not the only part. When we refer to a System on a Chip, all or most of the above components are integrated into a single chip. We can talk about any particular component of this SoC, such as how much RAM it has, the capabilities of its GPU, and, of course, the CPU architecture and layout.

    Because in a SoC the individual components are generally not given their own unique names, the name of the SoC will often be used to refer to the CPU component. Therefore, in Wikipedia the CPU of the Apple A7 SoC is also referred to as the A7.
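    To make the relationship concrete, here is a toy sketch (not a real hardware model; the component strings are just illustrative spec-sheet values for the A7 discussed above) of an SoC as a container whose CPU is only one part:

```python
# Toy sketch: a "system on a chip" modeled as a container of components.
# The spec strings below are illustrative, not an authoritative datasheet.

class SoC:
    def __init__(self, name, cpu, gpu, ram_mb):
        self.name = name      # name of the whole chip, e.g. "Apple A7"
        self.cpu = cpu        # the CPU is just one component inside it
        self.gpu = gpu        # so is the GPU
        self.ram_mb = ram_mb  # and the on-package RAM

a7 = SoC(name="Apple A7",
         cpu="1.3 GHz 64-bit dual-core ARMv8-A (Cyclone)",
         gpu="PowerVR G6430",
         ram_mb=1024)

# Because the CPU inside the SoC usually has no widely used separate name,
# spec sheets often label both the SoC and its CPU with the chip's name.
print(a7.name)   # the SoC as a whole
print(a7.cpu)    # the CPU component inside that SoC
```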


  • Related Question

    memory - How does the CPU write information to RAM?
  • Questioner

    My question is, how does the CPU write data to RAM?

    From what I understand, modern CPUs use several levels of cache to speed up RAM access. The RAM receives a request for data and sends a burst back to the CPU, which stores the requested data (plus a bunch of extra data that was close to the address the CPU wanted) in the highest-level cache; the CPU then progressively pulls smaller and smaller chunks of data down through the cache levels until the data is in the L1 cache, from which it is read directly into a CPU register.
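    Roughly, the read path I'm describing could be sketched like this (a toy model; real caches move whole cache lines rather than single addresses, and the level names and sizes here are arbitrary):

```python
# Toy sketch of a multi-level read path: on a miss, each level asks the
# next one down, and the value is copied into every level on the way up.

class Level:
    def __init__(self, name, lower):
        self.name = name
        self.lower = lower     # next level down (another cache, or main RAM)
        self.lines = {}        # address -> cached value

    def read(self, addr):
        if addr in self.lines:              # hit: serve it from this level
            return self.lines[addr]
        value = self.lower.read(addr)       # miss: go one level down
        self.lines[addr] = value            # keep a copy on the way back up
        return value

class RAM:
    def __init__(self, data):
        self.data = data
    def read(self, addr):
        return self.data[addr]

ram = RAM({0x40: 99})
l2 = Level("L2", ram)
l1 = Level("L1", l2)

print(l1.read(0x40))      # 99: fetched from RAM through L2 into L1
print(0x40 in l2.lines)   # True: each level now holds a copy
```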

    How does this process work when the CPU writes to memory? Does the computer go backwards down the levels of cache (in reverse order compared to a read)? If so, how is the information in the different caches synchronized with main memory? Also, how does the speed of a write operation compare to that of a read? What happens if I'm continuously writing to RAM, as in a bucket sort?

    Thanks in advance,

    -Faken

    Edit: I still haven't really gotten an answer I can fully accept. I especially want to know about the synchronization part of the RAM write. I know that we write to the L1 cache directly from the CPU, and that data gets pushed down the cache levels as the different levels are synchronized, until eventually main RAM is synchronized with the highest-tier cache. What I would like to know is WHEN the caches synchronize with main RAM, and how fast writes are in relation to reads.


  • Related Answers
  • Skizz

    Ah, this is one of those simple questions that have really complex answers. The simple answer is, well, it depends on how the write was done and what sort of caching there is. Here's a useful primer on how caches work.

    CPUs can write data in various ways. Without any caching, the data is stored in memory straight away and the CPU waits for the write to complete. With caching, the CPU usually stores data in program order, i.e. if the program writes to address A and then to address B, memory A will be written before memory B, regardless of the caching. The caching only affects when the physical memory is updated, and that depends on the type of caching used (see the above link). Some CPUs can also store data non-temporally, that is, the writes can be reordered to make the most of memory bandwidth. So writing to A, then B, then (A+1) could be reordered to writing A then A+1 in a single burst, then B.
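    The difference between updating physical memory immediately and updating it later can be sketched with a toy single-level cache (the addresses, values, and class layout are all illustrative, not how real hardware is structured):

```python
# Toy sketch of the two common write policies: write-through updates
# memory on every store; write-back only marks the line dirty and
# updates memory later, when the line is flushed or evicted.

class Cache:
    def __init__(self, write_back):
        self.write_back = write_back
        self.lines = {}        # address -> value held in the cache
        self.dirty = set()     # addresses modified but not yet in memory
        self.memory = {}       # backing store

    def write(self, addr, value):
        self.lines[addr] = value
        if self.write_back:
            self.dirty.add(addr)       # memory updated later, on flush/evict
        else:
            self.memory[addr] = value  # write-through: memory updated at once

    def flush(self):
        for addr in self.dirty:
            self.memory[addr] = self.lines[addr]
        self.dirty.clear()

wt = Cache(write_back=False)
wt.write(0xA, 1)
print(wt.memory.get(0xA))   # 1: write-through hits memory immediately

wb = Cache(write_back=True)
wb.write(0xA, 1)
print(wb.memory.get(0xA))   # None: memory is stale until the line is flushed
wb.flush()
print(wb.memory.get(0xA))   # 1
```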

    Another complication arises when more than one CPU is present. Depending on the way the system is designed, writes by one CPU won't be seen by the other CPUs because the data is still in the first CPU's cache (the cache is dirty). In multi-CPU systems, making each CPU's cache match what is in physical memory is termed cache coherency. There are various ways this can be achieved.
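    One such way, greatly simplified, is write-invalidate: a CPU that writes a line tells the other caches to drop their copies, so their next read misses and fetches the new value. A toy sketch (everything here is illustrative; real protocols such as MESI track per-line states rather than calling into each other like this):

```python
# Toy write-invalidate coherency between two caches over shared memory.
# Writes go straight through to memory for simplicity, and each write
# invalidates the line in every peer cache.

class CoherentCache:
    def __init__(self, memory, peers):
        self.memory = memory   # shared backing store (a dict)
        self.peers = peers     # list holding the other caches
        self.lines = {}

    def read(self, addr):
        if addr not in self.lines:          # miss: fetch from shared memory
            self.lines[addr] = self.memory.get(addr)
        return self.lines[addr]

    def write(self, addr, value):
        self.lines[addr] = value
        self.memory[addr] = value           # write-through for simplicity
        for peer in self.peers:             # invalidate stale copies
            peer.lines.pop(addr, None)

memory = {0x10: 5}
peers_a, peers_b = [], []
a = CoherentCache(memory, peers_a)
b = CoherentCache(memory, peers_b)
peers_a.append(b)
peers_b.append(a)

print(b.read(0x10))   # 5: b caches the old value
a.write(0x10, 7)      # a's write invalidates b's copy
print(b.read(0x10))   # 7: b misses, refetches, and sees the new value
```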

    Of course, the above is geared towards Pentium processors. Other processors can do things in other ways. Take, for example, the PS3's Cell processor. The basic architecture of a Cell CPU is one PowerPC core with several Cell cores (on the PS3 there are eight Cell cores, one of which is always disabled to improve yields). Each Cell core has its own local memory, sort of like an L1 cache, which is never written to system RAM. Data can be transferred between this local RAM and system RAM using DMA (Direct Memory Access) transfers. A Cell core can access system RAM and the RAM of other Cell cores using what appear to be normal reads and writes, but this just triggers a DMA transfer (so it's slow and really should be avoided). The idea behind this system is that the game is not just one program, but many smaller programs that combine to do the same thing (if you know *nix, then it's like piping command-line programs together to achieve more complex tasks).

    To sum up, writing to RAM used to be really simple in the days when CPU speed matched RAM speed, but as CPU speed increased and caches were introduced, the process became more complex with many different methods.

    Skizz

  • Am1rr3zA

    Yes, it goes backwards down the levels of cache and is eventually saved to memory. The important note is that in a multiprocessing system, the cache is shared between two or more processors (cores), and the data must be kept consistent. This is done either by having a single shared cache for all the processors, or by giving them separate caches that preserve consistency using critical sections (if data in one cache changes, it is forced to be written to memory and the other caches are updated or invalidated).