memory - What does the MHz of RAM really mean?

2014-07-07
  • Axel Kennedal - TechTutor

    Countless times I've heard and read that RAM can have different speeds, denoted in MHz (e.g. 1066 MHz). However, what this frequency really is has never been explained to me, and I'm having trouble finding an answer. My best guess is that, since frequency basically means "how many times per second", the MHz figure is how many times per second the RAM can communicate with the CPU. Please do correct me if I am wrong. Also: how does this relate to the amount of data being processed per second? E.g. how many mega- or kilobytes are sent from the RAM to the CPU per second in a scenario where it's being pushed to the limit?

  • Answers
  • Indrek

    Yes, it's the maximum number of clock cycles per second that the RAM operates at. With Double Data Rate (DDR) RAM, it actually communicates twice per cycle. So for DDR:

    200 MHz clock rate × 2 (for DDR, 1 for SDR) × 8 Bytes = 3,200 MB/s bandwidth

    This is why chips are now named for their bandwidth, not their frequency alone. The module above is called PC-3200, not "200 MHz". It's still necessary to know the clock rate, to ensure that the motherboard/CPU can operate at that clock.
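    The arithmetic above can be checked with a short Python sketch (the figures are just the example from this answer, not tied to any particular module):

    ```python
    def peak_bandwidth_mb_s(clock_mhz, transfers_per_cycle, bus_width_bytes):
        """Peak theoretical memory bandwidth in MB/s.

        clock_mhz           -- base clock rate of the memory in MHz
        transfers_per_cycle -- 2 for DDR, 1 for SDR
        bus_width_bytes     -- 8 bytes (64 bits) for a typical DIMM
        """
        return clock_mhz * transfers_per_cycle * bus_width_bytes

    # The PC-3200 example: 200 MHz DDR on a 64-bit bus.
    print(peak_bandwidth_mb_s(200, 2, 8))  # → 3200
    # The same clock as SDR would give only half the bandwidth.
    print(peak_bandwidth_mb_s(200, 1, 8))  # → 1600
    ```

    The "PC-3200" module name is simply that first result, the peak bandwidth in MB/s.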

    See the Wikipedia article on DDR SDRAM for more information.


  • Related Question

    cpu - Relationship between RAM & processor speed
  • deostroll

    RAM is just used for temporary storage. But since this storage is main memory (RAM), it is fast, and programs can easily read/write values into it. I've noticed that the more RAM a machine has, the less time it takes for applications to load/execute. But doesn't this actually depend on the processor speed (the MHz or GHz values)? I am wondering what the science/relationship between processor speed and RAM is.


  • Related Answers
  • caliban

    I believe you are referring to IO operations for processing purposes, and I'll attempt to give a simplified layman's answer.

    Assume the processor is a meat grinder in a factory, and that the RAM and hard disk are like a conveyor-belt system feeding unprocessed meat to the grinder to be ground.

    Assume the conveyor belt has two parts: the slow-but-wide part and the fast-but-narrow part. The former alludes to the hard disk's big storage but slow speed, and the latter to memory's small storage but high speed.

    So...

    HARD DISK CONVEYOR (WIDE BUT SLOW) -> RAM CONVEYOR (NARROW BUT FAST) -> GRINDER (PROCESSOR)

    When you increase your RAM, it is like widening the RAM conveyor, so the grinder can potentially receive much more in one go for processing.

    If your RAM is low, it means that while the RAM conveyor is fast, it is extremely narrow, so the volume of meat pouring into the grinder is small. At the same time, meat might choke up at the hard disk conveyor points (in short, meat that would be on the RAM conveyor in a well-optimized system is actually still on the hard disk conveyor, a.k.a. the paging/swap file).
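    The conveyor analogy can be turned into a toy model. The throughput numbers below are purely illustrative (a hypothetical hard disk and the PC-3200 bandwidth from the earlier answer), not real hardware measurements:

    ```python
    # Toy model of the conveyor-belt analogy. Illustrative figures only.
    DISK_MB_S = 150    # "wide but slow" conveyor (hypothetical HDD throughput)
    RAM_MB_S = 3200    # "narrow but fast" conveyor (PC-3200 peak bandwidth)

    def time_to_feed(data_mb, ram_capacity_mb):
        """Seconds to get data_mb of working set to the processor.

        Data that fits in RAM moves at RAM speed; whatever spills over
        must come off the disk first (the paging/swap "choking" case).
        """
        in_ram = min(data_mb, ram_capacity_mb)
        spilled = data_mb - in_ram
        return in_ram / RAM_MB_S + spilled / DISK_MB_S

    # A 1000 MB working set with plenty of RAM vs. too little RAM:
    print(time_to_feed(1000, ram_capacity_mb=2048))  # all from RAM
    print(time_to_feed(1000, ram_capacity_mb=512))   # half paged to disk
    ```

    Even in this crude sketch, the run where half the working set spills to disk takes roughly ten times longer, which is the "choking" the analogy describes.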

    To sum it all up in a hopefully easy-to-understand sentence:

    The relationship between RAM and the processor, and the reason programs run faster, is simply that with more RAM, more of the data to be processed can get to the processor faster.

    If the size of the system memory is equivalent to how wide the RAM conveyor is, then the Front-side Bus (FSB) speed is equivalent to how fast the RAM conveyor goes.

    Whew! Hope this answers your question!

  • harrymc

    I believe that the "scientific equation" is really a function of the program's behavior. It's best understood if we over-simplify a bit:

    • If the program is disk-intensive, its speed is proportional to disk speed.
    • If the program is oriented towards calculations, its speed is (mostly) proportional to CPU speed, since the memory cache nowadays is pretty intelligent and fast.
    • Otherwise, its speed is (mostly) proportional to memory speed.

    Summary: for every intensively active program, there is a bottleneck. Even with professional tools, it's not always easy to analyze which component is to blame. After discussing this with the administrator of a very large database, it seems the idea is to improve one machine bottleneck after another, because with each improvement the behavior may change. This isn't an exact science, because the hardware is too complex: see "Intel's 8-core CPUs will have 2.3 billion transistors".