laptop - CPU Clock, Hiren's BootCD, Windows XP

2014-07
  • user338395

    I have an older laptop and am thinking of multiple uses for it. The laptop ran Windows XP terribly slowly, so I did a fresh install of Windows XP and it still runs terribly slowly (yes, I installed the correct drivers).

    I booted Parted Magic from my Hiren's BootCD and generated a system report: the CPU is a single-core AMD Athlon XP-M 2800+ rated at 1.6 GHz, with 256 MB of RAM and an Nvidia GeForce 420M Go.

    Parted Magic is running the CPU at only 800 MHz, and Windows XP reports the same 800 MHz. Why is this, and how do I fix it so that this laptop, regardless of age, runs at full speed before I decide to do anything with it? (Also, some suggestions for its usage would be great once I get this straightened out.)
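
    The 800 MHz reading from both Parted Magic and Windows XP is consistent with CPU frequency scaling (AMD PowerNow! on the Athlon XP-M) holding the clock at its lowest step. What follows is a minimal diagnostic sketch for a Linux live environment such as Parted Magic, assuming the kernel exposes the standard cpufreq sysfs interface; exact paths can vary by kernel version.

        # Read the standard Linux cpufreq sysfs files to see whether
        # frequency scaling is holding the CPU at its lowest step.
        # Very old kernels may not provide these files at all.
        from pathlib import Path

        CPUFREQ = Path("/sys/devices/system/cpu/cpu0/cpufreq")

        def read(name):
            f = CPUFREQ / name
            return f.read_text().strip() if f.exists() else "n/a"

        print("governor    :", read("scaling_governor"))   # e.g. "powersave"
        print("current kHz :", read("scaling_cur_freq"))   # e.g. "800000"
        print("maximum kHz :", read("cpuinfo_max_freq"))   # e.g. "1600000"

        # If the governor is stuck on "powersave", switching to "performance"
        # (as root) should let the clock reach its rated maximum:
        #   echo performance > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor

    Under Windows XP the rough equivalent is the power scheme in Control Panel → Power Options; the "Always On" scheme typically keeps the CPU from being throttled down.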


    Related Question

    Has CPU speed already broken Moore's law?
  • JD Isaacks

    I remember sometime around 1995 having a computer with CPU speed of 75 MHz.

    Then a couple of years later around 1997 having one that was 211 MHz.

    Then a few years later around 2000 having one that was like 1.8 GHz, then around 2003 having one that was about 3 GHz.

    Now almost 8 years later they are still maxed at 3 GHz. Is this because of Moore's Law?


  • Related Answers
  • Rich Homolka

    The first thing to remember is that Moore's Law isn't a law; it's just an observation. And it doesn't have to do with speed, not directly anyway.

    Originally it was just an observation that component density pretty much doubles around every [time period]; that's it, nothing to do with speed.
    As a side effect, it effectively made things both faster (more things on the same chip, distances are closer) and cheaper (fewer chips needed, more chips per silicon wafer).
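
    Written out, the observation is just exponential growth in component count. As a sketch, with the doubling period left symbolic to match the [time period] above:

        N(t) = N_0 \cdot 2^{t/T}

    where N_0 is the component count at the start and T is the doubling period.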

    There are limits, though. As chip design follows Moore's Law and the components get smaller, new effects appear: smaller components have more surface area relative to their volume, so current leaks out, which forces you to pump more electricity into the chip. Eventually you lose enough juice that the chip runs hot and wastes more current than you can actually use.

    Though I'm not sure, this is probably the current speed limit: the components are so small that they're harder to keep electrically stable. There are new materials that help this some, but until some wildly new material appears (diamond, graphene), we're gonna stay close to the raw MHz speed limits.

    That said, CPU MHz isn't computer speed, just like horsepower isn't speed for a car. There are a lot of ways to make things faster without a faster top MHz number.

    LATE EDIT

    Moore's Law always referred to a process: that you can double density on chips at some regular, repeating timeframe. Now it seems the sub-20 nm process may be stalled. New memory is being shipped on the same process node as old memory. Yes, this is a single data point, but it may be a harbinger of the future.

  • soandos

    The faster the clock speed, the larger the voltage swings need to be to make a coherent signal. The higher the voltage has to spike, the more power is required; and the more power required, the more heat the chip gives off. Heat degrades chips faster and slows them down.

    At a certain point, it is simply not worth it to increase the clock speed any more, as the extra heat would cost more than adding another core would. This is why the number of cores is increasing instead.

    By adding more cores, heat goes up linearly; i.e., there is a constant ratio between total clock cycles and power draw. By making cores faster, the relationship between heat and clock speed is quadratic or worse, because the voltage has to rise along with the frequency. When the two costs are equal, it's time to add another core.
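
    A quick numeric sketch of that trade-off, using the standard CMOS dynamic-power approximation P ≈ C·V²·f. The capacitance and voltage figures below are made up for illustration, and the assumption that voltage scales linearly with frequency is a simplification:

        # Dynamic power of a CMOS chip: P ~ C * V^2 * f.
        # All numeric values are illustrative, not measurements.

        C = 1e-9      # effective switched capacitance (farads), made-up figure
        V0 = 1.2      # baseline core voltage (volts)
        F0 = 2.0e9    # baseline clock (Hz)

        def power(cores, volts, hertz):
            """Total dynamic power for `cores` identical cores."""
            return cores * C * volts**2 * hertz

        base = power(1, V0, F0)

        # Option A: double throughput with a second core -> power doubles.
        two_cores = power(2, V0, F0)

        # Option B: double throughput by doubling the clock; if voltage must
        # scale with frequency, power grows with f^3 -> eight times as much.
        fast_core = power(1, 2 * V0, 2 * F0)

        print(f"baseline 1 core @ 2 GHz: {base:.2f} W")
        print(f"2 cores  @ 2 GHz:        {two_cores:.2f} W ({two_cores/base:.0f}x)")
        print(f"1 core   @ 4 GHz:        {fast_core:.2f} W ({fast_core/base:.0f}x)")

    At a fixed voltage the growth would only be linear in frequency; the quadratic-or-worse behavior comes from the V² term once the voltage has to rise to keep the signal coherent.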

    This is independent of Moore's Law, but since the question is about the number of clock cycles, not the number of transistors, this explanation seems more apt. It should be noted that Moore's Law imposes limitations of its own, though.

    EDIT: More transistors mean more work is done per clock cycle. This happens to be a very important metric that sometimes gets overlooked (it is possible for a 2 GHz CPU to outperform a 3 GHz CPU), and it is a major area of innovation today. So even though clock speeds have been steady, processors have been getting faster in the sense that they can do more work per unit time.
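
    A toy illustration of that point: throughput is roughly instructions-per-cycle (IPC) times clock frequency. The IPC figures here are invented for the example:

        # Throughput ~ IPC (instructions per cycle) * clock frequency.
        # The IPC values are hypothetical, chosen only to make the point.

        def instructions_per_second(ipc, clock_hz):
            return ipc * clock_hz

        older = instructions_per_second(ipc=1.0, clock_hz=3.0e9)  # 3 GHz, low IPC
        newer = instructions_per_second(ipc=2.0, clock_hz=2.0e9)  # 2 GHz, high IPC

        print(f"3 GHz, IPC 1.0: {older / 1e9:.0f} billion instructions/s")
        print(f"2 GHz, IPC 2.0: {newer / 1e9:.0f} billion instructions/s")
        # The "slower" 2 GHz chip does more work per unit time.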

    EDIT 2: Here is an interesting link that has more information on related topics. You may find this helpful.

    EDIT 3: Unrelated to the number of total clock cycles (number of cores * clock cycles per core) is the issue of parallelism. If a program cannot parallelize its instructions, the fact that you have more cores means nothing; it can only use one at a time. This used to be a much larger problem than it is today. Most languages today support parallelism far more than they used to, and some languages (mostly functional programming languages) have made it a core part of the language (see Erlang, Ada and Go as examples).
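
    One standard way to quantify this is Amdahl's Law: with a fraction p of a program parallelizable, the speedup on n cores is 1 / ((1 − p) + p/n). A small sketch with illustrative fractions:

        # Amdahl's Law: speedup on n cores when only a fraction p of the
        # program can run in parallel. The p values are illustrative.

        def amdahl_speedup(p, n):
            return 1.0 / ((1.0 - p) + p / n)

        for p in (0.0, 0.5, 0.9):
            for n in (2, 4, 8):
                print(f"parallel fraction {p:.0%}, {n} cores: "
                      f"{amdahl_speedup(p, n):.2f}x speedup")
        # With p = 0 (no parallelism at all), extra cores give exactly
        # 1.00x: they contribute nothing.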

  • Zeki

    Moore's Law predicted that the number of transistors would double every 18 months. In the past, this meant that clock speeds could double too. Once we got to around 3 GHz, hardware makers realized that they were hitting up against speed-of-light limitations.

    Remember how the speed of light is 299,792,458 meters/second? That means that on a 3 GHz machine, light travels only about a tenth of a meter (roughly 10 cm) per clock cycle. And that's light traveling in a vacuum; take into account that electrical signals are slower than that, and that gates and transistors are slower still, and there's not much you can get done in that much time. As a result, clock speeds actually went down a little, and hardware moved towards multiple cores instead.
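
    The arithmetic is just d = c / f, the most distance a signal can cover in one clock cycle:

        # Distance light travels in one clock cycle: d = c / f.
        C_LIGHT = 299_792_458.0        # speed of light in vacuum, m/s

        for ghz in (1, 3, 5):
            f_hz = ghz * 1e9           # clock frequency in Hz
            d_m = C_LIGHT / f_hz       # meters per clock cycle
            print(f"{ghz} GHz: {d_m * 100:.1f} cm per cycle")

        # At 3 GHz even light covers only ~10 cm per cycle; real electrical
        # signals in silicon are slower still.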

    Herb Sutter talked about this in his 2005 article "The Free Lunch Is Over":

    http://www.gotw.ca/publications/concurrency-ddj.htm

  • Peter Mortensen

    Silicon-based chips have a general clock limit of 5 GHz or so before they literally start melting. There was research into using gallium arsenide (GaAs), which would allow chips to run at much higher clock rates, up in the hundreds of GHz, but I'm not sure how far that got.

    But Moore's Law has to do with transistors on a chip, not the performance or clock speed. And in that respect, I guess you could say that we're still keeping up with Moore's law by branching out into multiple processing cores still on the same chip.

    According to the Wikipedia article on Moore's Law, it's expected to hold until 2015.

    Another way we can have faster processors at the same clock speed has to do with the number of instructions that can be carried out per clock pulse. That number has steadily increased over the years.

    Timeline of instructions per second is a good chart of the number of instructions per clock cycle.

  • studiohack

    I am not an EE or Physics expert, but I HAVE been buying computers roughly every three to four years since 1981 (in '81 I bought my first, a Sinclair ZX81, and three years later a Commodore 64, toys really, and then my first IBM clone in 1987), so I have 30 years of "field data" on this subject.

    Even using my first IBM clone in '87 as the starting point (which had 640 KB of RAM and a 32 MB hard drive), by multiplying everything by two every 18 months I get 10 GB of RAM today and a 1 TB hard drive. DAMN CLOSE!!!! Just a little too much RAM and a little less HD than what sits on my desk today.
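
    For reference, the compounding arithmetic looks like this; the result is quite sensitive to the doubling period you assume, and the figures quoted above fall between the two common choices. The 2011 end year is itself an assumption here:

        # Compound doubling: size_now = size_then * 2 ** (years / period).
        # Starting figures are the 1987 machine described above; the end
        # year is assumed to be 2011, roughly when this was written.

        RAM_KB_1987 = 640           # 640 KB of RAM
        DISK_MB_1987 = 32           # 32 MB hard drive
        YEARS = 2011 - 1987

        for period in (1.5, 2.0):   # 18-month vs. 2-year doubling
            doublings = YEARS / period
            ram_gb = RAM_KB_1987 * 2**doublings / 1024**2
            disk_tb = DISK_MB_1987 * 2**doublings / 1024**2
            print(f"doubling every {period} years ({doublings:.0f} doublings): "
                  f"{ram_gb:,.1f} GB RAM, {disk_tb:,.3f} TB disk")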

    Considering that this "law" was obviously intended as a general expectation of the exponential growth of computer power into the future, I was frankly shocked at how accurate it was over essentially three decades. If only "civilian space travel", "personal robots" and "hover cars" had seen similar exponential growth. Pity.

    But STRICTLY from a user's perspective, Moore's Law seems to be holding fast FOR NOW.


    moderator condenses multiple answers:

    Although Moore's law explicitly deals with the number of transistors in a microchip, this is but ONE SINGLE benchmark in a much, much larger world of technologies advancing at an exponential rate.

    To get hung up on clock speeds misses the point. One need only look at PassMark's CPU benchmarks (http://www.cpubenchmark.net/high_end_cpus.html) to see that computers are getting VASTLY more powerful EVERY DAY.

    The number of transistors on a chip is simply one component in enhancing today's computer power.

    Though I am not Moore, nor do I know him, I'm guessing that in a broader sense his law was an attempt to predict the exponential increase in computing power. He chose "number of transistors on a chip" as a CONCRETE and, most important, QUANTIFIABLE yardstick, as opposed to a much more ambiguous and difficult-to-prove assertion that "computer power will double every couple of years". To prove his theory, clearly something that could be easily measured was needed as the yardstick. But I will go out on a limb here and suggest he was predicting a larger trend dealing with EVERY aspect of computers.

  • ubiquibacon

    We can still make processors go faster with silicon (though not much faster), but at this point it is cheaper and more efficient to just make processors (or their cores) smaller and stuff more of them onto a die. Newer materials such as graphene blow silicon out of the water in terms of transistor switching speed, but we have yet to master the manufacturing process. Be patient; more speed will come, probably sooner rather than later.