562 Views · 0 Helpful · 7 Replies

Hardware and Software clocks (NTP)

Mitrixsen
Level 1

Hello, everyone.

I am studying NTP for my ENCOR exam and I have a question regarding the hardware and software clocks. From my understanding, our device (like a Cisco Router) will have a hardware clock which is basically a chip powered by a battery. This preserves time even if the device loses power.

Then there is also the software clock? So this isn't an actual hardware component but rather something maintained by the OS which is used for purposes such as NTP. I also understand that the software clock is initialized by the hardware clock in case of a reboot, etc.

My question is, why exactly are there two clocks? Would it not be possible to have just the hardware clock and have that one synchronize with NTP? Why are there 2 clocks exactly?

Thank you.

David

7 Replies

Joseph W. Doherty
Hall of Fame

For a computer-based digital device, its functioning depends on a "clock", which isn't really for keeping time, but for synchronization.  (You're probably aware signals on network media are interpreted based on timing - for example, look "now" at the signal voltage, on or off, to determine whether a bit is on or off - much the same for everything digital.)

A computer-based digital device's "clock" is constantly "ticking".  As a side benefit, the device can count the ticks, so, if the tick frequency is known, it can compute (in software) an elapsed time.  If we provide the device a real-world "timestamp", it can function as a real-world "clock".

Originally, computer-based digital devices, when powered on, were given the date/time, manually entered.  Later, a battery-powered "clock" was added, so if the device was power cycled, or rebooted, it didn't need to be manually configured with the date/time.

So, why two clocks?  Because one is designed for device operation, its ticking not meant for real-world timekeeping, while the other is designed for exactly that purpose, including when the device is unpowered.

Could you actually design hardware to do both?  I'm sure you could, but I expect at a higher cost, capital and/or operational.

Regarding one or both "clocks" and NTP, on many Cisco platforms NTP can be used to keep both synchronized.  (In the past [still?] the battery clock required a configuration command to keep it synced with NTP - a sketch of that is below.)
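If it helps to see it, something like this (IOS syntax; exact command availability varies by platform and version, and the server address is just a placeholder):

    ntp server 192.0.2.1      ! point the software clock at an NTP source (placeholder address)
    ntp update-calendar       ! periodically copy NTP-learned time into the battery-backed calendar

    ! one-off copies (privileged EXEC, not config), in either direction:
    clock update-calendar     ! software clock -> hardware (calendar) clock
    clock read-calendar       ! hardware (calendar) clock -> software clock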

M02@rt37
VIP

Hello David,

Cisco routers and switches have both a hardware clock (also called the calendar clock) and a software clock because each serves a distinct purpose in maintaining accurate time across different conditions.

The hardware clock is a physical chip on the device, powered by a battery (Real-Time Clock or RTC). It keeps track of the time even when the device is powered off or rebooted. However, hardware clocks tend to drift over time and are not designed for high-precision timekeeping, especially in network synchronization scenarios.

The software clock, on the other hand, is maintained by the operating system and is the primary clock used for network operations, logging, and NTP synchronization. The software clock starts by taking the initial time from the hardware clock when the device boots up, but it can then be updated dynamically using NTP or manual configuration.

The reason there are two clocks is that the hardware clock alone is not sufficient for accurate timekeeping in a network environment. The software clock allows for real-time adjustments and high-precision synchronization with external time sources via NTP. If Cisco devices relied only on the hardware clock, timekeeping would be less accurate due to drift, and they would not be able to dynamically synchronize with other network devices.

=> Having both clocks ensures that time is preserved across reboots (using the hardware clock) while allowing for precise, real-time synchronization (using the software clock) when the device is operational...
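To tie that to the CLI: on most IOS devices you can inspect each clock separately; a rough sketch (exact output varies by platform and version):

    show clock detail     ! the software clock, including its time source (e.g. NTP)
    show calendar         ! the battery-backed hardware (calendar) clock
    show ntp status       ! whether the software clock is synchronized, and to which server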

Also, you should next learn about PTP! Good luck.

Best regards
.ı|ı.ı|ı. If This Helps, Please Rate .ı|ı.ı|ı.

Joseph W. Doherty

As a side note, the battery-powered clock also does "drift", (historically) often (and even possibly much) worse than the non-battery clock.

As a further side note, I've always wondered (even before battery clocks, GPS, NTP, etc.) why computer-based digital devices didn't use their power source to maintain a real-world clock with about zero drift.  (At least in my country, the AC cycle rate is very, very consistent.  Except for DST changes, how often do you have to reset even the least expensive AC-powered clock?)

As a trivia side note, for timekeeping, a drifting clock isn't in itself an issue; how consistent its drift is, is the very important issue.

Regarding PTP, it's sort of NTP on steroids, but if you really, really want the best time matching between devices, you may need to resort to OOB (out-of-band) time signals.

M02@rt37

You're absolutely right - hardware clocks do drift, sometimes even more than software clocks. The drift is usually due to temperature fluctuations, aging components, and the quality of the oscillator used. That's one reason why networked devices rely on external time sources like NTP or PTP for accurate synchronization.

Regarding power sources for timekeeping, it's an interesting observation that AC-powered clocks maintain excellent accuracy due to the stability of the power grid’s frequency (50 Hz or 60 Hz, depending on the country). Historically, some computer-based systems did use power line frequency for timekeeping, but as devices became more mobile (laptops, embedded systems, etc.), they needed an internal oscillator that wouldn't depend on an external AC source. That, combined with variations in power grid frequency in different regions, led to the reliance on internal clocks.

For precision timekeeping, drift itself isn't necessarily the biggest problem—consistency in drift is key. If a clock drifts predictably, you can compensate for it. But if drift varies unpredictably due to temperature changes or other environmental factors, synchronization becomes more challenging.
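As a purely illustrative number: if a clock is known to gain a steady 0.2 s per day, software can simply subtract 0.2 × (days elapsed) seconds and stay accurate; it's when that rate itself wanders with temperature or age that the correction falls apart.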

PTP is indeed like "NTP on steroids," offering sub-microsecond accuracy. But for extreme precision, such as in financial trading, scientific research, or defense applications, OOB time sources like GPS, atomic clocks, or dedicated time distribution networks become necessary. In such cases, even PTP might not be enough without disciplined oscillators or rubidium clocks to maintain precision over time.

Best regards
.ı|ı.ı|ı. If This Helps, Please Rate .ı|ı.ı|ı.

"Historically, some computer-based systems did use power line frequency for timekeeping, . . ."

I don't recall any.  Do you have any examples you can name?  (I'm not doubting there have been some, but my observation goes back almost 50 years, so possibly not a common approach?)

". . . but as devices became more mobile (laptops, embedded systems, etc.), they needed an internal oscillator that wouldn't depend on an external AC source."

Again, my understanding is that digital computer systems generally all have a master clock, not for keeping time as we think of keeping time, but for synchronization of hardware functions (there's a reason computer hardware usually has some Hertz value, often derived as a fraction of the master clock - ever hear about "overclocking" a PC?  Originally that was running the master clock faster, and it can create a whole passel of issues).  The software time clock, I believe, is also derived from the master clock.  I.e., it shouldn't matter whether the device is powered by AC or a battery.

I believe, internally, digital computer-based devices use DC, so using AC for a time clock would likely be more costly; plus I believe it's easier to subdivide a higher clock rate than to multiply a lower one.

"That, combined with variations in power grid frequency in different regions, led to the reliance on internal clocks."

No doubt, variation in various aspects of the local AC is a good reason to just not bother at all with using it to maintain a computer-based time clock; I just suspect it's more likely due to cost vs. benefit.

BTW, when discussing computer-based clock drift, we're likely in the realm of under a second a day.  But, over weeks, months, years, it adds up.  Plus, as M02@rt37 very correctly described, there are multiple factors that can lead to constantly variable drift rates, which makes self-correction unreliable.
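To put a rough number on "it adds up" (assuming, purely for illustration, a steady half-second-a-day drift):

    0.5 s/day × 365 days ≈ 183 s ≈ 3 minutes of error per year, if never corrected.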

Laugh, also, when something like NTP was created, it was better than going device to device trying to get all the devices' time synchronized based on calling the telephone company's dedicated current-time phone number (don't know if that even still exists - but multiple decades ago I found it was often several seconds off [according to my TimeKube and WWVB]).

M02@rt37

...Do you have any examples you can name?  (I'm not doubting there have been some, but my observation goes back almost 50 years, so possibly not a common approach?)...

Some old arcade machines and gaming consoles (early Atari systems) relied on the power line frequency for timing certain game logic, especially in NTSC/PAL-based video synchronization.

Also, since drift is not always linear, some corrections are harder to predict, especially with temperature fluctuations, aging components, and power variations affecting quartz oscillators. That’s why GPS and atomic clock synchronization are used where ultra-precise timing is required...


Best regards
.ı|ı.ı|ı. If This Helps, Please Rate .ı|ı.ı|ı.


Joseph W. Doherty

M02@rt37 wrote:

Some old arcade machines and gaming consoles (early atari systems) relied on the power line frequency for timing certain game logic, especially in NTSC/PAL-based video synchronization.

Interesting, like the Atari 2600 console?

I didn't have an Atari console, but along the way I did have an Atari 400 and an 800XL.  I don't recall the latter two using AC for timing, especially as both had (I recall?) a separate brick power supply that converted the AC to DC.

As a side note, the Atari 400/800 series were very, very interesting from a hardware perspective.  For several years, how those Atari home microcomputers could effectively run Star Raiders was a proprietary secret; later it was revealed those microcomputers had a GPU in them (remember, this was '70s tech, and the main CPU was a 2 MHz 6502).  The architect of those microcomputers was also the architect of the Amiga microcomputer (same philosophical design, just hardware tech 10 years newer).