Network Time Protocol (NTP) is a networking protocol for clock synchronization between computer systems over packet-switched, variable-latency data networks. It has been in use since 1985.
NTP provides Coordinated Universal Time (UTC) including scheduled leap second adjustments. No information about time zones or daylight saving time is transmitted; this information is outside its scope and must be obtained separately.
It is usually able to maintain time to within tens of milliseconds over the public Internet and can achieve 1 millisecond accuracy on local area networks under ideal conditions.
The protocol runs over UDP on port 123.
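A minimal client query is just a 48-byte UDP packet whose first byte encodes the leap indicator, version, and mode fields. The sketch below builds such a request packet (the function name is illustrative, not part of any standard library):

```python
import struct

def build_ntp_request() -> bytes:
    """Build a minimal 48-byte NTPv3 client request packet.

    The first byte packs LI (0), version number (3), and mode (3 = client);
    the remaining fields may be left zero for a simple query.
    """
    packet = bytearray(48)
    packet[0] = (0 << 6) | (3 << 3) | 3  # LI=0, VN=3, Mode=3 -> 0x1B
    return bytes(packet)
```

Sending this packet to a time server on UDP port 123 and reading the 48-byte reply is all a basic (S)NTP client needs to do on the wire.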
NTP uses a hierarchical system with levels of clock sources. Each level of this hierarchy is termed a stratum and is assigned a layer number starting with 0 (zero) at the top. The stratum level defines its distance from the reference clock and exists to prevent cyclical dependencies in the hierarchy. It is important to note that the stratum is not an indication of quality or reliability.
Stratum 0 – These are devices such as atomic (cesium, rubidium) clocks, GPS clocks or other radio clocks. Stratum-0 devices are traditionally not attached to the network; instead they are locally connected to computers (e.g., via an RS-232 connection using a pulse per second signal).
Stratum 1 – These are computers attached to Stratum 0 devices. Normally they act as servers for timing requests from Stratum 2 servers via NTP. These computers are also referred to as time servers.
Stratum 2 – These are computers that send NTP requests to Stratum 1 servers. Normally a Stratum 2 computer will reference a number of Stratum 1 servers and use the NTP algorithm to gather the best data sample, dropping any Stratum 1 servers that seem obviously wrong. Stratum 2 computers will peer with other Stratum 2 computers to provide more stable and robust time for all devices in the peer group. Stratum 2 computers normally act as servers for Stratum 3 NTP requests.
Stratum 3 – These computers employ exactly the same algorithms for peering and data sampling as Stratum 2, and can themselves act as servers for stratum 4 computers, and so on.
Although NTP supports up to 256 stratum values, only the first 16 are employed, and any device at Stratum 16 is considered to be unsynchronised.
How does the protocol work? (via stackoverflow)
A 64-bit NTP timestamp counts seconds since January 1, 1900: 32 bits for the whole seconds and 32 bits for the fraction of a second. The 32-bit seconds field will first roll over in 2036. Future versions of NTP are expected to use a 128-bit timestamp, which would not suffer from this rollover problem.
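The 64-bit format above can be sketched as a pair of conversion helpers (the function names and the 32.32 fixed-point packing are an illustration, not a complete NTP packet codec):

```python
import struct

# Seconds between the NTP epoch (1900-01-01) and the Unix epoch (1970-01-01).
NTP_UNIX_DELTA = 2208988800

def unix_to_ntp(unix_seconds: float) -> bytes:
    """Pack a Unix time as a 64-bit NTP timestamp (32.32 fixed point)."""
    ntp = unix_seconds + NTP_UNIX_DELTA
    seconds = int(ntp)
    fraction = int((ntp - seconds) * (1 << 32))
    return struct.pack("!II", seconds, fraction)

def ntp_to_unix(data: bytes) -> float:
    """Unpack a 64-bit NTP timestamp back to Unix time."""
    seconds, fraction = struct.unpack("!II", data)
    return seconds - NTP_UNIX_DELTA + fraction / (1 << 32)
```

The 32-bit fraction gives a resolution of about 233 picoseconds, far finer than the accuracy NTP actually achieves over a network.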
The client stores the timestamp (A) (remember all these values are in seconds) when it sends the request. The server sends a reply consisting of the “true” time when it received the packet (X) and the “true” time it will transmit the packet (Y). The client will receive that packet and log the time when it received it (B).
NTP assumes that the time spent on the network is the same for sending and receiving. Over enough samples on sane networks, it should average out to be so. We know that the total transit time from sending the request to receiving the response was B-A seconds. We want to remove the time that the server spent processing the request (Y-X), leaving only the network traversal time, so that’s B-A-(Y-X). Since we’re assuming the network traversal time is symmetric, the amount of time it took the response to get from the server to the client is [B-A-(Y-X)]/2. So we know that the server sent its response at time Y, and it took [B-A-(Y-X)]/2 seconds for that response to reach us.
So the true time when we received the response is Y+[B-A-(Y-X)]/2 seconds. And that’s how NTP works.
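The derivation above fits in a few lines of code. This is a sketch of the arithmetic only, with an illustrative function name; real implementations use the 64-bit timestamp format and additional filtering:

```python
def ntp_offset_and_delay(A: float, X: float, Y: float, B: float):
    """Compute network delay and clock correction from the four timestamps.

    A: client send time (client clock)
    X: server receive time (server clock)
    Y: server transmit time (server clock)
    B: client receive time (client clock)
    """
    delay = (B - A) - (Y - X)      # total network traversal time
    # The server sent its reply at Y and it took delay/2 to arrive, so the
    # "true" arrival time is Y + delay/2; the offset is how far the client's
    # recorded arrival time B is from that.
    offset = Y + delay / 2 - B
    return delay, offset
```

Note that `Y + delay/2 - B` algebraically equals `((X - A) + (Y - B)) / 2`, the symmetric form usually quoted for NTP's clock offset.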
Example (in whole seconds to make the math easy):
- Client sends request at “wrong” time 100. A=100.
- Server receives request at “true” time 150. X=150.
- The server is slow, so it doesn’t send out the response until “true” time 160. Y=160.
- The client receives the response at “wrong” time 120. B=120.
- Client determines the time spent on the network is B-A-(Y-X) = 120-100-(160-150) = 10 seconds.
- Client assumes the amount of time it took for the response to get from the server to the client is 10/2=5 seconds.
- Client adds that time to the “true” time when the server sent the response (Y=160) to estimate that it received the response at “true” time 165 seconds.
- Client now knows that it needs to add 45 seconds to its clock.
That is how a basic client would calculate what time it is.
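Plugging the example's numbers into the formulas reproduces each step (plain Python arithmetic, not the real wire protocol):

```python
# The four timestamps from the worked example above.
A, X, Y, B = 100, 150, 160, 120

transit = (B - A) - (Y - X)       # total time on the network
one_way = transit / 2             # assumed symmetric: server -> client leg
true_arrival = Y + one_way        # "true" time the response arrived
correction = true_arrival - B     # seconds the client must add to its clock

print(transit, one_way, true_arrival, correction)  # 10 5.0 165.0 45.0
```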
NTP servers and clients use a variant of Marzullo’s algorithm to get really accurate time from several sources, intersecting their estimates and discarding outliers rather than simply averaging them.
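The core idea of Marzullo’s algorithm can be sketched as a sweep over interval endpoints: each source supplies a time estimate as an interval, and the algorithm finds the range covered by the largest number of sources. This is a simplified illustration; NTP's actual clock selection is considerably more involved:

```python
def marzullo(intervals):
    """Return the (low, high) range agreed on by the most sources.

    intervals: list of (low, high) time estimates, one per source.
    """
    # Tag each endpoint: -1 when an interval opens, +1 when it closes.
    events = []
    for low, high in intervals:
        events.append((low, -1))
        events.append((high, +1))
    events.sort()

    best = count = 0
    best_low = best_high = None
    for i, (point, kind) in enumerate(events):
        count -= kind                    # an opening endpoint raises the count
        if count > best:
            best = count
            best_low = point
            best_high = events[i + 1][0]  # overlap ends at the next endpoint
    return best_low, best_high
```

For example, three sources reporting the intervals (8, 12), (11, 13), and (10, 12) all agree on the range 11 to 12, which the sweep returns.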