Setting your network's clocks from the Internet:
How safe is that?

Posted July 12, 2018 by Marc Abel

You're probably near at least one device which shows the time in a small area of its screen. The smallness of this display can mislead us to think that its correctness is a small detail. And sometimes that's true; the elder President Bush once said in a commencement address that he hadn't figured out how to get his VCR to stop flashing 12:00. But the time you see in the corner connects with a lot more than giving you a convenient place to check if dinner should be ready.

Attackers have been mis-setting computer clocks to facilitate break-ins for decades. During the Reagan presidency, someone I knew used a hardcoded password to break into a DEC VAX at a Big Ten research university. He was thrilled with his new administrative privileges, but was having a problem because the compromised password was due to expire in three days. Unable to extend his access, he worked around the issue by setting the system clock back exactly one year, hoping the change would go unnoticed for a time. He succeeded.

Today's systems are so trusting of their time accuracy that irregularities can cause far-reaching problems. Access to systems can be granted or revoked, claims paid or denied, air schedules changed, ships mispositioned, intrusion logs confused, backups corrupted, and records falsified on the basis of what a clock happens to be set to. Most applications don't even bother checking with the host operating system whether the clock has been synchronized at all, even though that information is readily available. An attacker can even forge someone's identity by setting a clock back to 2014, when thousands of SSL certificates compromised by the OpenSSL Heartbleed vulnerability were still being honored.
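
For the curious, here is what that check can look like. This is a minimal sketch for Linux with glibc, reading the kernel's synchronization status through the adjtimex(2) call; other operating systems expose the same information through different interfaces.

    import ctypes

    class timeval(ctypes.Structure):
        _fields_ = [("tv_sec", ctypes.c_long), ("tv_usec", ctypes.c_long)]

    # Linux struct timex, declared field-for-field so ctypes computes the layout.
    class timex(ctypes.Structure):
        _fields_ = [
            ("modes", ctypes.c_uint),     # which fields to set; 0 = query only
            ("offset", ctypes.c_long),
            ("freq", ctypes.c_long),
            ("maxerror", ctypes.c_long),
            ("esterror", ctypes.c_long),
            ("status", ctypes.c_int),     # status bits such as STA_UNSYNC
            ("constant", ctypes.c_long),
            ("precision", ctypes.c_long),
            ("tolerance", ctypes.c_long),
            ("time", timeval),
            ("tick", ctypes.c_long),
            ("ppsfreq", ctypes.c_long),
            ("jitter", ctypes.c_long),
            ("shift", ctypes.c_int),
            ("stabil", ctypes.c_long),
            ("jitcnt", ctypes.c_long),
            ("calcnt", ctypes.c_long),
            ("errcnt", ctypes.c_long),
            ("stbcnt", ctypes.c_long),
            ("tai", ctypes.c_int),
            ("_padding", ctypes.c_int * 11),
        ]

    TIME_ERROR = 5  # adjtimex() return value meaning "clock not synchronized"

    libc = ctypes.CDLL(None, use_errno=True)
    tx = timex(modes=0)                      # read-only query, changes nothing
    state = libc.adjtimex(ctypes.byref(tx))
    print("synchronized" if state != TIME_ERROR else "NOT synchronized")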

If an attacker could mis-adjust the clocks by 0.0083 seconds (half of one 60 Hz cycle, enough to bring generators onto the grid fully out of phase) at half of the electric generating plants in the United States, she could short out most of the grid.

Let's not worry about the electrical grid at the moment, because it's my belief that the utilities are more than typically careful with their time regulation. I've been in more than one utility's control center, and my presence wasn't for a tour. But what about our organizations, yours and mine? Here is what one usually finds:

  1. clocks not present
  2. clocks never set
  3. clocks not set since installation
  4. sometimes somebody sets some clocks
  5. clocks synchronized to a single Internet time server
  6. time derived from a few Internet time servers
  7. time derived from the Global Positioning System (GPS)
  8. on-site rubidium or cesium frequency standards

For some scenarios, the best clock security can be one of the first three choices, particularly if a strict "leave all clocks alone" policy ensures time only moves forward. This approach does little for recordkeeping, auditing, authentication, or coordination, but it works well for many toasters and dishwashers.

The fourth option is common for simple non-networked equipment, locations without ongoing Internet access, and configurations that unintentionally do not synchronize clocks. Far less common is for someone to choose this option for security reasons, although this is how I do things in-house for Wakefield. I'll say more about where this works well further on.

Most operating system vendors nudge you into using a specific Internet time source. Although there are a great number of operators to choose from, these vendors offer their own online time sources to the world at no charge. By doing so, they gain yet one more way to cause your machine to call home, collecting a little nugget of tracking and telemetry information under the guise of added value.

There are also third parties who hope you'll use their advertised time service instead, sometimes incorporating purposeful deviations from internationally agreed standards of time and recommending that you use their service exclusively. This does offer some technical advantages, but realize there are commercial motivations behind such freebies. (And don't do this if you supply power to the electric grid!)

Things get sophisticated at option 6, which uses the Network Time Protocol (NTP) to meticulously compare a number of documented time sources. NTP has many checks, balances, computations, validations, and security features intended to produce a rock-steady, highly available time reference of known closeness to Coordinated Universal Time. Its security-hardened successor, NTPsec, is currently in beta testing.

On many operating systems, you can install and start NTP with a single command, and time servers will be selected at random from a pool of volunteers and periodically rotated. The question becomes whether you know what is running. NTP has a complex array of options and features for a system that will typically run for years without adjustment, so a typical administrator won't be knowledgeable about most of these options, recent bugs, or new releases of the software. And the news gets worse from there: the reference implementation of NTP has 385,000 lines of code as of version 4.2.8, so there are mountain ranges of complexity for vulnerabilities to hide in and for security audits to get very expensive. NIST's National Vulnerability Database catalogs hundreds of NTP-related vulnerabilities. Notwithstanding these difficulties, NTP has been the primary method for online time handling for decades and will likely remain so.
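
To make the flavor of option 6 concrete, here is a toy sketch that queries several pool servers and distrusts outliers. It uses the third-party ntplib package, and the crude median test stands in for NTP's real selection and clustering algorithms, which are far more careful.

    import ntplib
    from statistics import median

    # Illustrative server choices; a careful deployment would use documented,
    # administratively independent sources.
    servers = ["0.pool.ntp.org", "1.pool.ntp.org",
               "2.pool.ntp.org", "3.pool.ntp.org"]

    client = ntplib.NTPClient()
    offsets = {}
    for host in servers:
        try:
            reply = client.request(host, version=3, timeout=2)
            offsets[host] = reply.offset      # local clock error in seconds
        except Exception as exc:
            print(f"{host}: no usable reply ({exc})")

    if offsets:
        mid = median(offsets.values())
        for host, off in sorted(offsets.items()):
            flag = "  <-- outlier?" if abs(off - mid) > 0.1 else ""
            print(f"{host}: offset {off:+.4f} s{flag}")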

An increasingly popular source of accurate time is the GPS satellite constellation used for navigation. Organizations can choose anything from commodity receivers costing less than $50 to commercial rack-mounted solutions with roof-mounted antennas, with the calculated time at the client's location falling within as little as two nanoseconds of the U.S. Naval Observatory's time. But the non-military portion of the GPS signal carries no authentication, so it's not conceptually difficult for an attacker to beam an incorrect time into your antenna. Here too your vendor has probably built in some safeguards, but you should be aware there is an attack surface.
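
For a sense of what a receiver actually hands you, here is a sketch that parses the time out of a $GPRMC sentence (the one below is a well-worn textbook example). Note that serious timing installations discipline their clocks with the receiver's pulse-per-second output; the NMEA text alone is only good to something like tens of milliseconds.

    from datetime import datetime, timezone

    # A standard example sentence; a real receiver streams these over a serial port.
    sentence = "$GPRMC,123519,A,4807.038,N,01131.000,E,022.4,084.4,230394,003.1,W*6A"

    fields = sentence.split(",")
    hhmmss, status, ddmmyy = fields[1], fields[2], fields[9]
    if status != "A":
        raise ValueError("receiver reports no valid fix")

    fix = datetime.strptime(ddmmyy + hhmmss, "%d%m%y%H%M%S").replace(tzinfo=timezone.utc)
    print("GPS-reported UTC:", fix)           # 1994-03-23 12:35:19+00:00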

In wartime the GPS constellation can be taken out by an adversary, and I can't rule out the possibility that even a simple government shutdown would impact service. Although critical infrastructure is supposed to be maintained during U.S. shutdowns, there have been some observations to the contrary within cybersecurity circles.

Item 8 is the ultimate set-and-forget alternative, where your organization purchases one or more atomic clocks. I don't mean the "atomic clocks" you have on your wall at home; those are ordinary clocks that set themselves from radio signals that emanate from real atomic clocks. What's meant here is the purchase of devices containing rubidium or cesium frequency standards. You set the time when you install it, and 1,000 years later it's off by less than two seconds (or even closer if you need better).

One reason real atomic clocks are widely used is their low cost. They can cost thousands more than a Big Mac, but you don't have to pay an administrator to unravel the uncertainties of Network Time Protocol, you don't need an outdoor antenna exposed to snowfall or hurricanes, and microwave cavities don't require security patches. And if your budget is really tight but you're electronically gifted, eBay can hook you up with a used rubidium oscillator for well under $200. All you have to do is build your own clock, as 14-year-old Ahmed Mohamed famously did while Barack Obama was in office.

I love measuring instruments and would have fun with a rubidium standard at Wakefield, but it turns out that option 4 is accurate enough and hopefully secure enough for use within this company. What you might not know is that although your computer clock does drift, many operating systems can automatically compensate for the drift once you measure it. The temperature at Wakefield's datacenter is maintained within a narrow range, so the CMOS clock of an ordinary Dell tower can operate with less than a tenth of a second of drift per day. I set the clock on this single machine, and let another half dozen systems set their own clocks from that.
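
The arithmetic behind that compensation is simple enough to sketch. Measure the offset against a trusted reference twice, a known interval apart, and you have a rate you can divide back out (on Linux, hwclock keeps a similar drift factor in /etc/adjtime). The offsets below are made-up illustrative numbers:

    SECONDS_PER_DAY = 86400

    # Two offset measurements against a trusted reference, ten days apart:
    offset_day_0 = 0.00     # seconds fast, immediately after setting the clock
    offset_day_10 = 0.62    # seconds fast, ten days later

    drift = (offset_day_10 - offset_day_0) / (10 * SECONDS_PER_DAY)
    print(f"drift: {drift * 1e6:.2f} ppm")    # about 0.72 ppm for these numbers

    def corrected(raw_elapsed_seconds):
        """Scale a raw elapsed-time reading to compensate for measured drift."""
        return raw_elapsed_seconds * (1 - drift)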

Over time, I will be able to update this post to say how well Wakefield's clocks do as seasons change. In the meantime, there is a route for an outsider to infer how well these clocks are tracking. You are welcome to snoop on my clocks if you can figure out how.
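
One such route, sketched here with a placeholder address rather than Wakefield's, is to compare a server's HTTP Date response header against a trusted local reference. The header only has one-second granularity, so this reveals coarse errors, not fine tracking:

    import urllib.request
    from datetime import datetime, timezone
    from email.utils import parsedate_to_datetime

    # Placeholder URL; any HTTP server that sends a Date header will do.
    resp = urllib.request.urlopen("https://example.com/", timeout=5)
    remote = parsedate_to_datetime(resp.headers["Date"])
    local = datetime.now(timezone.utc)
    print(f"remote clock minus local: {(remote - local).total_seconds():+.1f} s")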

Option 4 is not an all-or-nothing proposition: there exist very cheap oven-controlled crystal oscillators (OCXOs) that are far more accurate than commodity Dell PCs, without having to invest in rubidium frequency standards. Less than $50 will buy you a standard that's stable to better than one second per year, but you'll have to build a clock around it.

Options 5, 6, and 7 aren't all-or-nothing propositions either: a system that gets the current time from the Internet or a satellite, then asks a human for permission to make a specific, one-time adjustment to the clock, can keep highly accurate time without opening your systems to potential adversaries.
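
A minimal sketch of that ask-first pattern, again leaning on the third-party ntplib package, might look like this. Stepping the clock with time.clock_settime() requires root privileges, and a cautious version would merely print the adjustment for an operator to apply by hand:

    import time
    import ntplib

    reply = ntplib.NTPClient().request("pool.ntp.org", version=3, timeout=2)
    print(f"measured offset: {reply.offset:+.4f} s")

    if input("Apply this one-time adjustment? [y/N] ").strip().lower() == "y":
        # Requires root; steps CLOCK_REALTIME once, then leaves it alone.
        time.clock_settime(time.CLOCK_REALTIME, time.time() + reply.offset)
        print("clock stepped")
    else:
        print("clock left alone")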

Can we kill the leap second
without waiting for everyone else?

Posted July 19, 2018 by Marc Abel

2018 is the centennial year of the Standard Time Act, by which the U.S. Congress largely adopted the time conventions our railroads had been using for 35 years. Standardized time is not strictly necessary; in fact, precision timekeeping is mostly a human invention, because the time dilation that arises from motion and gravity makes it impossible for time to behave uniformly for everyone.

Modern commercial, legal, and social needs make a "pretty good enough" standard of time highly valuable. For most of us, and for computer security in particular, better than "pretty good enough" time is not a good idea. Unfortunately, Congress has delegated day-to-day clock setting to an international consortium of astronomers and experimental physicists who, despite having noble intentions, have lost sight of why most of us standardize time to begin with.

Time that is used to synchronize actual people needs to be simple and humane. So much for Daylight Saving Time. Leap years aren't completely simple and humane, but people love their years, and leap years can at least be calculated in advance. Leap seconds, on the other hand, cannot be predicted in advance, can cause unexpected problems with resource integrity and availability, and are ripe for abolishment.

The length of a second has been standardized in a few different ways over the last thousand years; its current definition is over 50 years old. Unfortunately, the 1967 definition of the second derives from the length of the average day during the 18th and 19th centuries. So our second is now the wrong size, but our problems only begin here. It turns out that the earth's rotation isn't as consistent or as predictable as our best clocks, so a process for having a "leap second" was formalized.

Leap seconds can be positive or negative; that is, some minutes could have 59 seconds, most minutes have 60 seconds, and some minutes have had 61 seconds. They are scheduled irregularly with only about six months' advance notice, based on how the earth happens to have been moving recently. This tampering with the clocks and lives of all the world caused minimal harm in 1972, because people adjusted their clocks more often than leap seconds occurred. But it is no longer 1972, and most of the clocks we look at keep themselves set on their own. No longer slipping into our lives quietly, leap seconds have become an intrusion.

Leap seconds have caused all kinds of havoc, messing up airline operations, financial markets, popular Internet services, navigation, and perhaps other critical infrastructure no one has confessed about. Programmers don't do a very good job implementing leap seconds, and even when they get their portion right, the systems they interface with probably aren't compatible. Most operating systems have no native support for leap seconds. Only this week, Microsoft announced that Windows Server 2019 and Windows 10 will be getting leap second support, but remember my warning about programmers getting leap seconds right: Microsoft Excel has incorrectly treated 1900 as a leap year for three decades.
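
That Excel defect makes a tidy demonstration of how calendar edge cases bite. Python's standard library refuses the date that Excel's serial number 60 happily maps to:

    from datetime import date

    # 1900 is not a leap year (divisible by 100 but not 400), yet Excel's
    # serial date 60 maps to February 29, 1900 for Lotus 1-2-3 compatibility.
    try:
        date(1900, 2, 29)
    except ValueError as exc:
        print("datetime says:", exc)   # day is out of range for month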

An international proposal to abolish leap seconds has been in the works since 2005, but consensus has been slow to come, with the next conference scheduled for 2023. But regardless of what happens then, many businesses can dispense with leap seconds right now, without waiting for the authorities to catch up.

You can look around and find lots of advice for dealing with leap seconds. Some folks largely ignore clock synchronization. Others "step the clock" at the scheduled time (or the wrong time, depending on how careful the implementer was), and hope nothing breaks. Google asks you to let them "smear" your leap second adjustments gradually across 20 hours, thereby avoiding time discontinuity problems at the cost of creating accuracy and synchronization problems. Unsurprisingly, Microsoft has condemned Google's method as not meeting regulatory requirements for some types of commerce. As all three of the schemes you would read on other websites have problems, I offer two more options with their own pros and cons.

Leap seconds happen when people try to reconcile two slightly conflicting notions of time. So-called Universal Time, abbreviated UT1, measures time by the earth's rotation. UT1 is less steady than International Atomic Time, abbreviated TAI, which is produced by averaging hundreds of atomic clocks. TAI has the advantage that a second is a second is a second; in UT1 the duration of a second is not constant. The relationship between TAI and UT1 changes constantly as the earth's motions change. In recent decades, TAI has ticked about 21 parts per billion faster on average than UT1.
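
That figure is easy to sanity-check, assuming roughly one leap second every eighteen months, which matches the recent average:

    # One extra second per ~18 months, expressed as a rate:
    seconds_per_18_months = 1.5 * 365.25 * 86400
    print(f"{1 / seconds_per_18_months * 1e9:.0f} ppb")   # prints 21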

Coordinated Universal Time, or UTC, is derived from TAI and has seconds that are all the same duration, but its clocks are periodically adjusted to stay within 0.9 seconds of UT1. These adjustments are the leap seconds we want to avoid if our objective is security as measured by system availability and integrity. How can we do that? By rejecting UTC and choosing either TAI or UT1 to run our businesses on.

But wait, isn't that against the law?

It turns out that agreeing on what the law requires is harder than measuring time. UT1 is never more than a second different from UTC; at this writing UT1 leads UTC by about 0.07 seconds. Unless you're generating electricity for the national grid or doing something else really exacting, you're likely to get away with switching your clocks to UT1.

Switching to UT1 is easy, because for the past three years NIST has provided a public Network Time Protocol (NTP) server that gives the time in UT1 instead of UTC. For more specifics, look up UT1 and NTP in a single search. But there is a risk to take into account: NIST has only one UT1 server, and so far as I know there isn't another one in the world. Authentication is not presently supported. You could set up redundant UT1 servers if you're simultaneously talented at coding, networking, and astronomy, or you can talk to NIST's advertised point of contact and explain the need for additional servers and authentication support.
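
Mechanically, a UT1 query looks exactly like any other NTP query; only the timescale differs. A sketch with the third-party ntplib package follows, with a placeholder hostname for you to replace with the one that search turns up:

    import ntplib
    from datetime import datetime, timezone

    # Placeholder hostname; substitute NIST's published UT1 server.
    UT1_SERVER = "ut1.example.gov"

    reply = ntplib.NTPClient().request(UT1_SERVER, version=3, timeout=2)
    print("UT1 now:", datetime.fromtimestamp(reply.tx_time, tz=timezone.utc))
    print(f"UT1 minus this machine's clock: {reply.offset:+.3f} s")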

If you would rather all your seconds be the same duration, you can switch to TAI. An important advantage of TAI is that you always know the exact span from now to any future time. Another advantage is that instead of having just one computer in the world to get your time from, you have hundreds of sources: NTP servers, GPS, longwave and shortwave radio, and other synchronization options.

The drawbacks to TAI are the lack of existing synchronization software, difficulty using the NTP protocol because it omits the cumulative number of leap seconds, and a permanent departure from UTC. Since 2017, TAI has been 37 seconds ahead of UTC, enough for some people to notice your clock is ahead of theirs. But can you get away with it legally? You probably can, especially if you mention TAI in fine print and know a good lawyer.
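
Deriving TAI from any ordinary UTC source is a fixed addition between leap seconds, which is why those hundreds of sources count. A sketch; the 37-second constant is correct as of this writing but must be bumped whenever a new leap second is announced:

    from datetime import datetime, timedelta, timezone

    # TAI = UTC + 37 s since January 1, 2017; update when a leap second is announced.
    TAI_MINUS_UTC = timedelta(seconds=37)

    def tai_now():
        return datetime.now(timezone.utc) + TAI_MINUS_UTC

    print("TAI:", tai_now())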

Do I have to buy a different clock to switch to UT1 or TAI?

Only if your clock sets itself and you can't reprogram it. If you set the clock or control the software that sets it, you're all set. TAI clocks run exactly the same speed as UTC clocks, except they don't jump around. UT1 clocks run slower than UTC, but unless you're using a really fancy clock (rubidium frequency standard or high-end oven controlled crystal oscillator), it won't be able to keep precise enough time on its own for the discrepancy to be measurable. You'll just periodically set your UT1 clock as you would any other clock.

This all sounds complex. Why go there?

My explanation is complex because what we do now is complex. Most businesses need only refer their master clock to a UT1 source, and all management, uncertainties, vulnerabilities, and risks of leap seconds go away forever. That's a one-minute task.


Wakefield Cybersecurity LLC
Wake secure℠