linux - why does "date -u" show a time 25 seconds ahead of "export TZ=UTC; date"?

  • Questioner

    I am running clockspeed-0.62 (DJB software) on Ubuntu 13.10. I have set up time zones so that /usr/share/zoneinfo is a link to /usr/share/right, and I have set up an up-to-date /etc/leapsecs.dat file, i.e. 25 leap seconds.

    Could someone help me understand (and solve) why the "date -u" command shows a time 25 seconds ahead of "export TZ=UTC; date"?

    root@ubuntu:~# date; (export TZ=UTC; date); date -u
    Mon Feb  3 22:33:56 CET 2014
    Mon Feb  3 21:33:56 UTC 2014
    Mon Feb  3 21:34:21 UTC 2014
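
    One way to narrow down where the 25 seconds come from is to compare a TZ value that has to be resolved through the installed zone files against a pure POSIX-style TZ string that never touches them. This is only a rough probe, and the idea that "date -u" behaves like the POSIX-style string is an assumption here, not something stated in the question:

    # TZ=UTC should resolve to the installed zone file (/usr/share/zoneinfo/UTC),
    # which in a "right/" tree carries the leap-second table.
    TZ=UTC date
    # TZ=UTC0 is a plain POSIX-style TZ string; with no matching zone file it
    # should be interpreted directly, with no leap-second table involved.
    TZ=UTC0 date
    # For comparison with the transcript above.
    date -u

    If the first two commands disagree by 25 seconds, that would point at the leap-second-aware zone file rather than at the system clock itself.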
    
  • Answers

    Related Question

    performance - Why are there different clockspeeds and timings on RAM?
  • Eikern

    I don't consider myself a novice when it comes to building computers or computer hardware in general, but I've never taken the time to fully understand RAM.

    Can somebody tell me why there is a need for different clock speeds when it comes to RAM, and what the timings are good for?

    Thanks


  • Related Answers
  • Axxmasterr

    There is a very simple way to demonstrate the timing of memory in practical terms that everyone will understand. The megahertz and gigahertz of clock speeds and bus speeds can seem a bit opaque if you do not have an electronics background.

    The first thing to consider is the actual clock speed. The clock speed is effectively the number of times per second the computer can conduct an operation; for memory, the operations are usually reads or writes. The clock speed and synchronization are needed so that all of the electronic components know when to listen for the electrical signal that represents a 1 or a 0. If either side talks or listens too early or too late, there is a high likelihood of an error in determining the correct state of the bit in memory.

    Second, let's abstract this as if it were a phone call. Imagine we are each on a phone connected directly to the other. We have a metronome that clicks once every five seconds, and every time it clicks we take turns speaking, exchanging information back and forth. We express the information in a predetermined way: scream over the line when the metronome clicks to represent a 1 in memory, and stay silent to represent a 0.

    Now that the example is laid out, I can use it to demonstrate a few things about the way RAM functions. The protocol in this example is that we take turns every time the metronome clicks. If either of us misses one of the clicks, we find ourselves out of sync. Synchronization errors show up when the two of us are not talking and listening at the right moments: if you start listening just a millisecond after I stopped yelling, you would erroneously interpret that as a 0 state. They call this jitter. The worse the two sides get out of sync, the more of these state-determination errors occur.

    The clock speed is needed to allow the motherboard and the memory to correctly exchange state information with one another. The clock speed of the memory is more or less equal to the speed at which data can be read from or written to the RAM.
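
    To put a rough number on that relationship (an illustrative sketch using a common DDR3 module, not figures from the question): the effective transfer rate times the bus width gives the module's peak throughput.

    # DDR3-1600 runs an 800 MHz I/O clock and transfers twice per clock,
    # i.e. 1600 million transfers per second over a 64-bit (8-byte) bus,
    # so its peak rate is 1600 * 8 = 12800 MB/s -- hence the PC3-12800 label.
    echo "$((1600 * 8)) MB/s"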

    The reason there is such a variation in the speeds of memory modules is that, over the last several years, materials science has produced lower-power memory that can sustain a greater number of reliable state interrogations per second, effectively making the memory faster. The time it takes the electrical signal in the wire to go from a complete 0 to a complete 1 (also referred to as the low and high states) is called the transient time. When reading or writing memory, the closer the read/write is to the clock sync pulse, the more likely it is to succeed; the closer it is to the midpoint between clock pulses, the more likely it is to fail.

    Most average users do not get into nitty-gritty details like this, but if you are brave and have designs on overclocking a computer or cranking up the bus speed, then you probably care much more about this sort of thing. You can often get greater speed out of electronics, but the side effects are more heat and more errors. The heat is a function of the increase in the number of operations taking place, and the errors are usually directly related to the performance characteristics of the semiconductor material in the memory. The speed rating of memory is more or less just the performance level the memory is designed to achieve with an acceptable number of read/write errors.

    I hope this answers your question....

  • Tall Jeff

    Your question seems to ask why there are different speed grades of memory available. As in, why wouldn't there just be one speed, the fastest? Also, perhaps related: "why do the faster speed grades cost more, when I can overclock the slower stuff and it really is the same chip, right!?"

    One of the other answers painted the reasoning behind this as strictly "marketing". This is part of it, perhaps, but there are solid technical / physics reasons for this as well.

    Here's the deal: when semiconductor devices are made, there is actually a tremendous amount of variability in the process. That is, even though the process is the same for each wafer run of devices, each individual part comes out somewhat different. Not only do some work and some not, but the working parts will ultimately perform at different levels depending on voltage, temperature, power usage, clock speed, etc.

    After a few wafer runs of a given type of part, the semiconductor vendor has a notion of what the yield curve looks like under various sets of test conditions. They then use statistical analysis to define a set of performance bins that each individual part falls into: in effect, the slower and the faster speed bins. For parts made in large volumes, there are usually several possible bins and many possible combinations of test conditions the chips are labeled to comply with.

    So for memory parts, a given device may comply under all test conditions at 600 MHz but not at 700 MHz, so the part goes into the 600 MHz bin. A part that complies with everything at 700 MHz but not at 800 MHz goes into the 700 MHz bin, and so on.

    This all conforms to a distribution curve, and you can see that for progressively higher speed bins, fewer and fewer parts are going to comply with the tighter specs. In effect, the higher-speed parts are scarcer, so they can command a higher price from the people who really want them. Conversely, the slower parts can be sold at a lower cost because they are, in effect, easier to make.
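
    A toy way to see that scarcity effect (entirely made-up numbers, just to illustrate the binning idea, not real yield data): simulate a batch of parts whose maximum stable frequency follows a rough bell curve, then count how many land in each speed bin.

    awk 'BEGIN {
      srand()
      for (i = 0; i < 10000; i++) {
        # Sum of four uniform randoms gives a rough bell curve centered near 650 MHz.
        f = 450 + (rand() + rand() + rand() + rand()) * 100
        if (f >= 800)      bin["800 MHz"]++
        else if (f >= 700) bin["700 MHz"]++
        else if (f >= 600) bin["600 MHz"]++
        else               bin["reject"]++
      }
      for (b in bin) print b, bin[b]
    }'

    The 800 MHz bin comes out nearly empty while the lower bins fill up, which mirrors the supply picture described above.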

    Summarizing: In the end, this comes down to the variability in the manufacturing process, statistics and some basic economics of supply and demand.

  • JP Alioto

    Here is a good article on RAM timing and a good memory performance guide.

  • Pyrolistical

    Marketing.

    If you look at the benchmarks, paying double for your RAM to get great timings gives you only a 1-5% performance increase.

    Just buy cheap but good-quality RAM and save a lot of money.