memory - "Faster" RAM at lower clock speeds?

2014-05-03
  • Questioner

    I've been kind of interested in the mechanics (er, electronics) of computer systems lately and after a bunch of research and looking at my computer's properties, I've come across something strange.

    Most people say faster RAM means, well, faster RAM. Sounds logical, right? But after looking at my computer I noticed that my installed RAM is capable of being underclocked. It usually runs at 333 MHz (DDR2-667) with 5-5-5-15 timings. However, one of the programs I'm using to inspect my PC says that it is also capable of running at 266 MHz with 4-4-4-12 timings and at 200 MHz with 3-3-3-9 timings.

    The thing is, according to my calculations (simply the timing number divided by the clock frequency, to get the latency in seconds), 200 MHz at 3-3-3-9 timings actually has better latency than 333 MHz at 5-5-5-15 timings.

    So my question is: is it in fact true that I can improve the performance of my system, when a program accesses memory in a truly random fashion (as opposed to sequential reads/writes), by underclocking the RAM and selecting tighter timings? Or have I made an error somewhere?

    Edit: Before you start arguing that I'm mistaken about RAM "speed", let me define what I mean by "faster". RAM has both latency and bandwidth. When I say "faster" I am strictly talking about latency, not bandwidth. For sequential reads/writes, yes, bandwidth is much more important than latency (RAM operates in burst mode, which achieves its maximum bandwidth by pumping sequential rows of data into the CPU cache even if the CPU never asked for the extra data). For random access, however, latency completely outweighs bandwidth.
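    The questioner's arithmetic can be checked directly. The following is a quick sketch using the frequencies and CAS numbers quoted above (first-word delay only; the other timing fields are ignored):

    ```python
    # First-word latency = CAS cycles / actual clock frequency.
    # Frequencies are the base clocks quoted in the question, not the DDR "effective" rates.
    def cas_latency_ns(cas_cycles, clock_mhz):
        """Absolute time in nanoseconds to wait out the CAS latency."""
        return cas_cycles / clock_mhz * 1000  # cycles / MHz = microseconds, *1000 -> ns

    print(cas_latency_ns(5, 333))  # 5-5-5-15 at 333 MHz -> ~15.02 ns
    print(cas_latency_ns(4, 266))  # 4-4-4-12 at 266 MHz -> ~15.04 ns
    print(cas_latency_ns(3, 200))  # 3-3-3-9  at 200 MHz -> 15.00 ns
    ```

    So the questioner's math does check out, but only barely: the 200 MHz / CL3 setting wins by hundredths of a nanosecond.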

  • Answers
  • Racter

    So my question is: is it in fact true that I can improve the performance of my system, when a program accesses memory in a truly random fashion (as opposed to sequential reads/writes), by underclocking the RAM and selecting tighter timings? Or have I made an error somewhere?

    This is hard to answer, as there are many variables to consider. In theory you should be able to improve the performance of just those programs. This assumes that memory access is highly fragmented or that you are reading/writing small amounts of data. Also note that your overall system performance may degrade. The best thing to do is give it a try, as it is a very simple test, assuming your BIOS provides access to those settings.

  • user10547
  • Bigbio2002

    Typically, you'd gain more benefit from higher MHz than from lower CAS timings. In the general case, even though your CAS timings may increase from 4-4-4-12 to 5-5-5-15, for example, the extra 133 MHz of clock speed gained allows the memory to complete those CAS cycles in less time, thereby being "faster" in terms of random access.

    However, it seems that you've stumbled upon an edge case where the lower CAS timings take less absolute time than the higher CAS timings, despite the lower clock speed. In theory, I suppose a 100% random workload would perform better in this scenario, if your math works out. But as others have said, there are other factors to consider (motherboard, etc.), and this would only apply to an entirely random workload that reads a single word at a time. Even for the case you defined, the difference is marginal. Anything other than that hypothetical random workload would perform worse than it would with the RAM modules at the higher clock speed.

    In the real world, when there's a tradeoff, go for the higher MHz (or registered modules, or whatever applies to your need).
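    To make the general advice concrete, here is a small sketch. The DDR2-800 figures are illustrative examples, not numbers from the question: at the same CAS count, a higher clock always yields a lower absolute delay, which is why the usual advice favors MHz.

    ```python
    def cas_latency_ns(cas_cycles, clock_mhz):
        """Absolute CAS delay in nanoseconds."""
        return cas_cycles / clock_mhz * 1000

    # Same CL, higher clock: unambiguously faster (illustrative DDR2-800 vs DDR2-667).
    print(cas_latency_ns(5, 400))  # DDR2-800, CL5 -> 12.5 ns
    print(cas_latency_ns(5, 333))  # DDR2-667, CL5 -> ~15.0 ns

    # The questioner's edge case: tighter timings at a lower clock roughly tie.
    print(cas_latency_ns(3, 200))  # 15.0 ns
    print(cas_latency_ns(5, 333))  # ~15.02 ns -- the difference is marginal
    ```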


  • Related Question

    memory - RAM access speeds, latency vs bandwidth
  • Questioner

    I'm a bit confused about RAM speeds, latency, and transfer rates.

    From what I can make out so far, RAM is rated on its clock speed and latency. There are a few different latency measurements (the string of four numbers, e.g. 5-5-5-18); however, the only really important number is the last one, which measures the overall latency between accesses of two "random" areas of memory (please correct me if I'm wrong).

    My question is this:

    How would you calculate the actual RAM latency (i.e., in nanoseconds)? Is it the tRAS divided by the RAM clock speed, or the tRAS divided by the processor speed (which doesn't sound right to me; the processor shouldn't affect RAM access like that), or is it something totally different?

    Also, how do dual channel and triple channel affect RAM latency (from what I can gather, they don't; they just affect bandwidth), and how exactly do they work? Is it basically something like striping with RAID for hard drives?

    Lastly, is there any difference between access speeds for reading and writing? Does writing take longer and, if so, how is that reflected in the latency timings, if at all?

    Thanks

    -Faken


  • Related Answers
  • Jeff Atwood

    the string of 4 numbers, eg, 5-5-5-18

    Memory timings are specified through a series of numbers:

    2-3-2-6-T1
    3-4-4-8
    2-2-2-5

    These numbers indicate the number of clock cycles it takes the memory to perform a certain operation. The smaller the number, the faster the memory.

    CL-tRCD-tRP-tRAS-CMD

    • CL: CAS Latency. The time between a command being sent to the memory and the memory beginning to reply to it; that is, the time between the processor asking for some data and the memory returning it.
    • tRCD: RAS to CAS Delay. The time it takes between the activation of the line (RAS) and the column (CAS) where the data are stored in the matrix.
    • tRP: RAS Precharge. The time it takes between disabling the access to a line of data and beginning access to another line of data.
    • tRAS: Active to Precharge Delay. How long the memory has to wait until the next access to the memory can be initiated.
    • CMD: Command Rate. The time it takes between the memory chip being activated and when the first command may be sent. Sometimes this value is not provided. It's usually T1 (1 clock cycle) or T2 (2 clock cycles).
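    The five fields above can be pulled apart mechanically. Here is a purely illustrative sketch (the field names follow the list above; the parser itself is a hypothetical helper, not a real tool):

    ```python
    def parse_timings(spec):
        """Split a timing string like '2-3-2-6-T1' into named fields.
        The command rate (CMD) is optional and defaults to None when omitted."""
        parts = spec.split("-")
        names = ["CL", "tRCD", "tRP", "tRAS"]
        timings = dict(zip(names, (int(p) for p in parts[:4])))
        timings["CMD"] = parts[4] if len(parts) > 4 else None
        return timings

    print(parse_timings("2-3-2-6-T1"))
    # {'CL': 2, 'tRCD': 3, 'tRP': 2, 'tRAS': 6, 'CMD': 'T1'}
    print(parse_timings("5-5-5-18"))
    # {'CL': 5, 'tRCD': 5, 'tRP': 5, 'tRAS': 18, 'CMD': None}
    ```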

    CAS latency is arguably the most important number. Memory with CL = 3 will delay three clock cycles to deliver data; memory with CL = 5 will delay five clock cycles to perform the same operation.

    The period of each clock cycle can be calculated:

    T = 1 / f

    Say you had DDR2-533 memory (533 MT/s effective, 266 MHz actual clock); the clock period is then 3.75 ns. If this DDR2-533 memory has CL = 5, it would delay 18.75 ns before delivering data; if it had CL = 3, it would delay 11.25 ns.
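    The worked example above can be reproduced in a few lines. This is a sketch using the DDR2-533 figures just given (266.67 MHz is the standard DDR2-533 base clock):

    ```python
    # T = 1 / f: period of one memory clock cycle.
    actual_clock_hz = 266.67e6            # DDR2-533 actual clock (533 MT/s effective)
    period_ns = 1 / actual_clock_hz * 1e9
    print(round(period_ns, 2))            # -> 3.75

    # CAS latency in absolute time = CL cycles * clock period.
    for cl in (5, 3):
        print(cl, round(cl * period_ns, 2))  # CL5 -> 18.75 ns, CL3 -> 11.25 ns
    ```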

    Bear in mind memory also implements burst modes, so if the next requested data address is sequential from the first, there are no delays in getting to the "next" data.

    Is it just basically something like striping with RAID for hard drives?

    I believe so, yes. Dual and triple channel (memory must be installed in pairs or triples, respectively) are about bandwidth, not latency.