On the specs of a computer I saw, it said the CPU speed was 2.something Gigahertz, but the RAM speed was 1300MHz (1.3GHz). That does not make sense, why would the RAM be slower than the CPU? You could never use the full speed of the CPU, could you? Somebody, please explain. I'm stumped with this one.
Any information would be appreciated.
tl;dr You're fine: you can fully utilize your processor, and you won't have any performance trouble. A new motherboard/RAM is not required.
CPUs have an on-die cache; this is where all data access takes place. If there is data that is in memory but not in the cache, it has to be loaded into the cache first.
The speed of the RAM has less to do with how long it takes to access it than the memory timings do. These timings specify exactly how many cycles it takes to access the RAM. You can see this Wikipedia article for details about memory timings.
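As a rough sketch of why the timings matter as much as the headline speed: real access time is cycles divided by clock rate. The module speeds and CAS latencies below are illustrative examples, not specs from the question:

```python
# Rough sketch: turning a CAS latency (counted in clock cycles) into
# wall-clock time. All figures here are illustrative examples.

def cas_latency_ns(cas_cycles, io_clock_mhz):
    """Time for a column access = cycles / clock frequency (in ns)."""
    return cas_cycles / io_clock_mhz * 1000

print(cas_latency_ns(9, 800))  # DDR3-1600 (800 MHz I/O clock), CL9 -> 11.25 ns
print(cas_latency_ns(6, 400))  # DDR2-800  (400 MHz I/O clock), CL6 -> 15.0 ns
```

Notice that a nominally "faster" module with looser timings can end up with nearly the same real latency as a slower one with tight timings.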
As far as your CPU is concerned, it actually has an internal base clock much lower than 2 GHz; what gives you the 2 GHz effective clock speed is the CPU multiplier. As long as your base clock speed is less than your RAM's clock speed, you're fine. For example, my i5 2500k runs at 3.6 GHz: its base clock is 100 MHz and its multiplier is 36.
Another thing to be aware of is that your RAM isn't actually running at 1600 MHz; it's running at 200 MHz internally. You can get a table of that info here. But as I said above, that 200 MHz is higher than the 100 MHz base clock, so even on a good processor like my i5, 200 MHz is more than enough speed.
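The clock relationships above can be sketched in a few lines. The i5 2500k figures come from this answer; the divide-by-8 for DDR3 is the standard relationship between the rated transfer rate and the internal array clock:

```python
# Effective CPU clock = base clock * multiplier (i5 2500k example).
BASE_CLOCK_MHZ = 100
CPU_MULTIPLIER = 36
cpu_mhz = BASE_CLOCK_MHZ * CPU_MULTIPLIER
print(cpu_mhz)  # 3600 MHz, i.e. 3.6 GHz

# "DDR3-1600" names the transfer rate in MT/s. The I/O bus runs at half
# that (transfers happen on both clock edges), and DDR3's 8n prefetch
# puts the internal memory array at 1/8 of the transfer rate.
DDR3_RATE = 1600
internal_clock_mhz = DDR3_RATE // 8
print(internal_clock_mhz)  # 200 MHz, comfortably above the 100 MHz base clock
```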
I remember sometime around 1995 having a computer with CPU speed of 75 MHz.
Then a couple of years later around 1997 having one that was 211 MHz.
Then a few years later around 2000 having one that was like 1.8 GHz, then around 2003 having one that was about 3 GHz.
Now almost 8 years later they are still maxed at 3 GHz. Is this because of Moore's Law?
First, remember that Moore's Law isn't a law; it's just an observation. And it doesn't have to do with speed, not directly anyway.
Originally it was just an observation that component density pretty much doubles around every [time period], that's it, nothing to do with speed.
As a side effect, it effectively made things both faster (more things on the same chip, distances are closer) and cheaper (fewer chips needed, more chips per silicon wafer).
There are limits though. As chip design follows Moore's law and the components get smaller, new effects appear. Smaller components have more surface area relative to their volume, so more current leaks out, which means you need to pump more power into the chip. Eventually enough leaks away that the chip runs hot and wastes more current than it can put to use.
Though I'm not sure, this is probably the current speed limit: the components are so small that they're harder to keep electrically stable. New materials help with this somewhat, but until some wildly new material appears (diamond, graphene) we're going to stay close to the raw MHz speed limits.
That said, CPU MHz isn't computer speed, just like horsepower isn't speed for a car. There are a lot of ways to make things faster without a faster top MHz number.
Moore's law always referred to a process, that you can double density on chips at some regular repeating timeframe. Now it seems sub-20nm process may be stalled. New memory is being shipped on the same process as old memory. Yes, this is a single point, but it may be a harbinger of the future.
Moore's law describes a long-term trend in the history of computing hardware. The number of transistors that can be placed inexpensively on an integrated circuit has doubled approximately every two years. It's not about clock speed.
Also, a CPU's clock speed is not a reliable indicator of its processing power.
The faster the clock speed, the larger the voltage swings need to be to make a coherent signal. The higher the voltage has to spike, the more power is required. The more power that is required, the more heat the chip gives off. That heat degrades chips faster and slows them down.
At a certain point, it is simply not worth it to increase the clock speed any more, as the cost of dealing with the increased temperature would be greater than the cost of adding another core. This is why the number of cores is increasing instead.
By adding more cores, heat goes up roughly linearly: each extra core at the same clock draws about the same extra power. By making cores faster, heat grows much faster than linearly with clock speed, because the voltage has to rise along with the frequency. When the frequency route becomes the more expensive of the two, it's time to add another core.
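The cores-versus-clock tradeoff can be sketched with the standard dynamic-power model, P ≈ C·V²·f. The assumption that voltage must scale in proportion to frequency is a simplification for illustration, not measured data from any real chip:

```python
def dynamic_power(voltage, freq, capacitance=1.0):
    """Standard dynamic-power model: P ~ C * V^2 * f (normalized units)."""
    return capacitance * voltage ** 2 * freq

baseline = dynamic_power(voltage=1.0, freq=1.0)

# Doubling throughput with a second core: same voltage and clock per
# core, so total power roughly doubles.
two_cores = 2 * dynamic_power(voltage=1.0, freq=1.0)

# Doubling throughput by doubling the clock: voltage must rise with
# frequency (assumed proportional here), so power blows up instead.
double_clock = dynamic_power(voltage=2.0, freq=2.0)

print(two_cores / baseline)     # 2.0  -- linear in core count
print(double_clock / baseline)  # 8.0  -- far worse than linear in clock
```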
This is independent of Moore's Law, but since the question is about the number of clock cycles, not the number of transistors, this explanation seems more apt. It should be noted that Moore's law does impose limitations of its own, though.
EDIT: More transistors means more work can be done per clock cycle. This is a very important metric that sometimes gets overlooked (it is possible for a 2 GHz CPU to outperform a 3 GHz CPU), and it is a major area of innovation today. So even though clock speeds have been steady, processors have been getting faster in the sense that they can do more work per unit time.
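A toy calculation of "work per unit time" makes the 2 GHz-beats-3 GHz point concrete; the instructions-per-cycle (IPC) figures here are invented for illustration:

```python
def throughput_mips(clock_ghz, ipc):
    """Rough throughput in MIPS: clock (GHz -> MHz) * instructions per cycle."""
    return clock_ghz * 1000 * ipc

older = throughput_mips(clock_ghz=3.0, ipc=1.0)  # higher clock, lower IPC
newer = throughput_mips(clock_ghz=2.0, ipc=2.0)  # lower clock, higher IPC

print(older)  # 3000.0 MIPS
print(newer)  # 4000.0 MIPS -- the slower-clocked chip does more work
```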
EDIT 2: Here is an interesting link that has more information on related topics. You may find this helpful.
EDIT 3: Unrelated to the number of total clock cycles (number of cores * clock cycles per core) is the issue of parallelism. If a program cannot parallelize its instructions, having more cores means nothing: it can only use one at a time. This used to be a much larger problem than it is today. Most languages now support parallelism far better than they used to, and some languages (mostly functional programming languages) have made it a core part of the language (see Erlang, Ada and Go as examples).
Moore's law predicted that the number of transistors would double every 18 months. In the past, this meant that clock speeds could double too. Once we got to around 3 GHz, hardware makers realized that they were running up against speed-of-light limitations.
Remember how the speed of light is 299,792,458 meters/second? That means that on a 3 GHz machine, light travels only about a tenth of a meter (10 cm) each clock cycle. And that's light in a vacuum; electrical signals are slower than that, and gates and transistors are slower still, so there's not much you can get done in that much time. As a result, clock speeds actually went down a little, and instead hardware moved towards multiple cores.
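The distance budget works out like this (light in vacuum; real signals in copper and silicon are slower still):

```python
C = 299_792_458  # speed of light in vacuum, m/s

def meters_per_cycle(clock_hz):
    """Distance light travels during one clock period."""
    return C / clock_hz

for ghz in (1, 3, 5):
    print(f"{ghz} GHz: {meters_per_cycle(ghz * 1e9) * 100:.1f} cm per cycle")
# At 3 GHz the budget is about 10 cm per cycle, before accounting for
# the slower propagation through actual wires and gates.
```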
Herb Sutter talked about this in his 2005 "The Free Lunch Is Over" article.
Silicon based chips have a general clock limit of 5 GHz or so before they literally start melting. There was research into using gallium arsenide (GaAs), which would allow chips to have higher clock rates, like up in the hundreds of GHz, but I'm not sure how far that got.
But Moore's Law has to do with transistors on a chip, not the performance or clock speed. And in that respect, I guess you could say that we're still keeping up with Moore's law by branching out into multiple processing cores still on the same chip.
According to the Wikipedia article on Moore's Law, it's expected to keep up until 2015.
If you want to know another way in which we can have faster processors at the same clock speeds, it also has to do with the number of instructions that can be carried out per clock pulse. That number has steadily increased over the years.
Wikipedia's "Timeline of instructions per second" is a good chart of how instruction throughput has grown over the years.
I am not an EE or Physics expert, but I HAVE been buying computers roughly every three to four years since 1981 (in '81 I bought my first, a Sinclair ZX81; three years later a Commodore 64, toys really; and then my first IBM clone in 1987), so I have 30 years of "field data" on this subject.
Even using my first IBM clone in '87 as the starting point (which had 640K of RAM and a 32MB hard drive), multiplying everything by two every 18 months gets me about 40GB of RAM and a 2TB hard drive today. DAMN CLOSE!!!! A little more RAM and a little more HD than what sits on my desk today, but remarkably close after nearly a quarter century of doubling.
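For anyone who wants to check the doubling arithmetic (1987 baseline from the post; the 2011 end year is my assumption):

```python
def project(start_value, start_year, end_year, months_per_doubling=18):
    """Project a quantity forward assuming it doubles every N months."""
    doublings = (end_year - start_year) * 12 / months_per_doubling
    return start_value * 2 ** doublings

ram_kb = project(640, 1987, 2011)   # started at 640K of RAM
disk_mb = project(32, 1987, 2011)   # started at a 32MB hard drive

print(ram_kb / 1024 / 1024, "GB of RAM")    # 40.0 GB
print(disk_mb / 1024 / 1024, "TB of disk")  # 2.0 TB
```

Twenty-four years at 18 months per doubling is 16 doublings, or a factor of 65,536.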
Considering that this "law" was obviously intended as a general expectation of the exponential growth of computer power into the future, I was frankly shocked at how accurate it was over essentially three decades. If only "civilian space travel", "personal robots" and "hover cars" had seen similar exponential growth. Pity.
But from a STRICTLY user's perspective, Moore's Law seems to be holding fast FOR NOW.
moderator condenses multiple answers:
Although Moore's law explicitly deals with the number of transistors in a microchip, this is but ONE SINGLE benchmark in a much, much larger world of technologies advancing at an exponential rate.
To get hung up on clock speeds misses the point. One need only look at the PassMark CPU benchmarks (http://www.cpubenchmark.net/high_end_cpus.html) to see that computers are getting VASTLY more powerful EVERY DAY.
The number of transistors on a chip is simply one component in enhancing today's computer power.
Though I am not Moore, nor do I know him, I'm guessing that in a broader sense his law was an attempt to predict the exponential increase in computing power. He chose "number of transistors on a chip" as a CONCRETE and, most importantly, QUANTIFIABLE yardstick, as opposed to the much more ambiguous and difficult-to-prove assertion that "computer power will double every couple of years". To prove his theory, clearly something that could be easily measured was needed as the yardstick. But I will go out on a limb here and suggest he was predicting a larger trend dealing with EVERY aspect of computers.
We can still make silicon processors go faster (but not much faster), but at this point it is cheaper and more efficient to make the processors (or their cores) smaller and stuff more of them onto a die. Newer materials such as graphene blow silicon out of the water in terms of transistor switching speed, but we have yet to master the manufacturing process. Be patient; more speed will come, probably sooner rather than later.