Counting the Seconds – A Look At How Computers Tell Time

As the United States prepares to fall back out of Daylight Saving Time, I feel it’s timely (no pun intended) to write about how the computers we rely on keep track of time. After all, between an old-fashioned clock on the wall and our smartphones, most of us would consider the smartphone more reliable and accurate. So how do they do it? First, let’s talk about how we, as humans, understand written time.

Human-Readable Time

Humans refer to time in many different ways. We change what we say based on what’s relevant. “A quarter till.” Out of context, this is not very helpful, but in a conversation, this might be all the information that is needed. “It’s 2:45.” Okay, but a.m. or p.m.? “It is 2:45 p.m.” Right, but what’s the date? And in what timezone? You can see where I’m going with this. I could say it’s November 6, 2022 at 2:45 p.m. PST with 35 seconds for good measure, but our species would be stuck in the stone age if we spoke like this.

Humans are great at quickly picking up on context in conversation, but computers aren’t as fortunate.¹ They work great with quantifiable and objective values.

Naïve Solution

Heads up! This section contains some basic computer science concepts. If that's a bit over your head, feel free to skip ahead to Oddities.

Okay, so computers need to be precise with time. Is that all? Let’s try. If we were to very quickly come up with a solution to this problem, we might come up with something like this: create a data structure consisting of integer variables for the month, day, year, hour, minute, and second. We’ll throw in a string variable for the timezone just to be safe. It’s a very minimal solution, and not unlike the human-readable formats that we use today. The issue is that our solution is for computers, not humans, and there are a few glaring flaws. For one, because these are just integer variables strung together, there’s no validation of values. There could be a June 31st, a month of negative thirteen, or a 61st minute.² And is the integer representation of the first month of the year 0 or 1? Most people would say one, but the Java programming language’s implementation uses zero.³
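Here’s a quick sketch of that naive structure in Java. The class and field names are just my own for illustration; the point is that nothing enforces sensible values.

    // A naive, illustrative date-time structure: just raw fields, no validation.
    // Nothing stops a caller from creating a June 31st or a 61st minute.
    public class NaiveDateTime {
        int month;       // is January 0 or 1? The structure itself doesn't say
        int day;
        int year;
        int hour;
        int minute;
        int second;
        String timezone; // free-form, e.g. "PST", so typos go unnoticed

        NaiveDateTime(int month, int day, int year,
                      int hour, int minute, int second, String timezone) {
            this.month = month;
            this.day = day;
            this.year = year;
            this.hour = hour;
            this.minute = minute;
            this.second = second;
            this.timezone = timezone;
        }
    }

    // Compiles and "works" despite describing a nonexistent moment:
    // new NaiveDateTime(6, 31, 2022, 25, 61, 0, "PST");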

This solution leaves much to be desired, but before we look at a better solution, let’s look at some more design considerations.

Oddities

Something as important as a long-term solution for computers to reliably tell time must be durable and robust, capturing every possibility. Okay, but time is fairly straightforward, right? 24 hours in a day, 7 days in a week, and 365 days in a year. Not quite. There are a lot of small edge cases in our understanding of time.

Let’s start with a fairly well-known one. There are not necessarily exactly 365 days in a year. Leap years occur roughly once every four years, allowing February to have 29 days and leaving us with a total of 366 days in a year.

So, when designing this solution, make sure that every four years is a leap year, right? Nope. There is a tiny stipulation about leap years with regard to years divisible by 100. Those years are not leap years and will have 365 days. So, we’ll keep that in mind as well. But wait, 2000 was a leap year, so what’s going on here? Okay, there’s one other stipulation. Years divisible by 100 are once again considered leap years if those years are also divisible by 400. So, 1900 and 2100 are not leap years, but 2000 is a leap year. Phew.
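Put together, those three rules translate into a small check. Here’s an illustrative sketch (in practice, Java’s built-in java.time.Year.isLeap already does this for you):

    // Gregorian leap year rules:
    // divisible by 4 -> leap year, unless divisible by 100 -> not a leap year,
    // unless also divisible by 400 -> a leap year after all.
    static boolean isLeapYear(int year) {
        if (year % 400 == 0) return true;   // 2000 was a leap year
        if (year % 100 == 0) return false;  // 1900 and 2100 are not
        return year % 4 == 0;               // ordinary years: every fourth one
    }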

As you can see, there are a lot of considerations when designing a system of time, many of which will not be relevant until the turn of a century or millennium. And that’s one single rule that exists already. What happens if a new day of the week is created 100 years from now? It’s not an easy job. Here are a few other oddities that I found to be interesting.

  • There are not necessarily 86,400 seconds in a day. Leap seconds were introduced as recently as 1972, adding an additional second every so often.⁴
  • One second will not always be the same duration. In 2008, Google introduced the “leap smear,” which distributes a leap second evenly over the 86,400 seconds of that day.⁵
  • December 30, 2011 never occurred in Samoa. This is because the country swapped sides of the International Date Line to make business and trading easier with countries on that side of the globe.
  • Daylight Saving Time is not utilized in every region. The regions that do utilize DST do not necessarily spring forward and fall back at the same time. Lebanon, for example, falls back a week before the United States does. In addition, from our perspective, regions in the Southern Hemisphere fall forward and spring back.
  • Parts of Arizona do observe Daylight Saving Time. While most of Arizona sticks to Mountain Standard Time year-round (which matches Pacific Daylight Time during the summer), the Navajo Nation in northern Arizona continues to observe DST.
  • Moving westward does not always mean turning back the clock. The Hopi Reservation, encapsulated by the previously mentioned Navajo Nation, does not observe DST. During DST, this means you could exit from the west side of the Hopi Reservation into the Navajo Nation and the time could change from 5 p.m. to 6 p.m.

Clearly, the nuances of time were not designed to be captured by a computer system⁶, and yet we find ourselves relying on them so heavily today. So, what’s the real solution?

Unix Time

Unix time is the generally accepted solution to our problem. It’s simply the number of seconds since January 1, 1970 at exactly 12:00 a.m. UTC (a.k.a. the Unix epoch). This solution has many advantages, and I’ll list a few of them below. The first thing we should take away, though, is that it is generally not human-readable: at the time of writing, it has been about 1.7 billion seconds since the Unix epoch.
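To see that number for yourself, here’s a minimal sketch in Java (the java.time package has exposed Unix time directly since Java 8):

    import java.time.Instant;

    public class UnixTimeDemo {
        public static void main(String[] args) {
            // Seconds since 1970-01-01T00:00:00Z (the Unix epoch)
            long seconds = Instant.now().getEpochSecond();
            // The same epoch, just with millisecond precision
            long millis = System.currentTimeMillis();
            System.out.println("Unix time: " + seconds); // roughly 1,667,000,000 in early November 2022
        }
    }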

Advantages

First off, every value will have a corresponding, unambiguous point in time. This avoids the June 31st issue mentioned earlier.

Second, every value is distinct. 1:31 a.m. occurs twice in one day while falling back out of Daylight Saving Time, but each instance has a different value in Unix time. This fixes a lot of strange time oddities.
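Here’s a sketch of that ambiguity in Java: the wall-clock time 1:31 a.m. on November 6, 2022 exists twice in the Pacific timezone, and each occurrence maps to its own Unix timestamp.

    import java.time.LocalDateTime;
    import java.time.ZoneId;
    import java.time.ZonedDateTime;

    public class FallBackDemo {
        public static void main(String[] args) {
            ZoneId pacific = ZoneId.of("America/Los_Angeles");
            // 1:31 a.m. on the morning the U.S. falls back in 2022
            LocalDateTime ambiguous = LocalDateTime.of(2022, 11, 6, 1, 31);

            // The same wall-clock time happens once in PDT (UTC-7) and again in PST (UTC-8)
            ZonedDateTime first  = ambiguous.atZone(pacific).withEarlierOffsetAtOverlap();
            ZonedDateTime second = ambiguous.atZone(pacific).withLaterOffsetAtOverlap();

            System.out.println(first.toEpochSecond());  // 1667723460
            System.out.println(second.toEpochSecond()); // 1667727060, exactly 3,600 seconds later
        }
    }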

Thirdly, timezones are built in. Because the value is the number of seconds since midnight January 1, 1970 UTC, it is an absolute point in time. A Unix timestamp of 0 refers to both January 1, 1970 12:00 a.m. UTC and December 31, 1969 4:00 p.m. PST. This matters because, as that example shows, even the date can change when converting between timezones.
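A quick sketch of that in Java, rendering the same timestamp of 0 in two different zones:

    import java.time.Instant;
    import java.time.ZoneId;
    import java.time.ZoneOffset;

    public class EpochDemo {
        public static void main(String[] args) {
            Instant epoch = Instant.ofEpochSecond(0); // one absolute point in time

            // The same instant viewed from two zones: even the date differs
            System.out.println(epoch.atZone(ZoneOffset.UTC));
            // 1970-01-01T00:00Z
            System.out.println(epoch.atZone(ZoneId.of("America/Los_Angeles")));
            // 1969-12-31T16:00-08:00[America/Los_Angeles]
        }
    }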

Finally, localization is trivial thanks to Unix time’s neutral representation. While in the States, we might write November 6, 2022, other countries may use a different format, like 6 November 2022. Each region can easily create a function to render Unix time in its own format.
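For instance, here’s a sketch of formatting the same timestamp for two locales using java.time (the exact wording of the output can vary slightly between JDK versions):

    import java.time.Instant;
    import java.time.ZoneId;
    import java.time.format.DateTimeFormatter;
    import java.time.format.FormatStyle;
    import java.util.Locale;

    public class LocalizationDemo {
        public static void main(String[] args) {
            // A Unix timestamp falling on November 6, 2022 (UTC)
            Instant instant = Instant.ofEpochSecond(1667745900L);

            DateTimeFormatter us = DateTimeFormatter.ofLocalizedDate(FormatStyle.LONG)
                    .withLocale(Locale.US).withZone(ZoneId.of("UTC"));
            DateTimeFormatter uk = DateTimeFormatter.ofLocalizedDate(FormatStyle.LONG)
                    .withLocale(Locale.UK).withZone(ZoneId.of("UTC"));

            System.out.println(us.format(instant)); // November 6, 2022
            System.out.println(uk.format(instant)); // 6 November 2022
        }
    }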

There are more benefits than I could ever list here, but what about drawbacks? These drawbacks aren’t limited to Unix time, but are more limitations of computer systems themselves.

Drawbacks

First, to properly keep track of seconds elapsed, a computer has to keep track of time even when it is powered off. Otherwise, you would have to set the clock every time your phone ran out of battery. The usual solution is a real-time clock: a small, battery-powered circuit (often called the CMOS clock) that, among other things, keeps counting time. It remains powered even while the computer is off.

Next, computers need space to store numbers, and larger numbers require more bits. Because Unix timestamps grow larger as time goes on, eventually we will need to increase the number of bits used to store them. Not doing so would result in what’s called an overflow, wrapping the value around from the maximum to the minimum. Relatively recently, many pieces of software switched from 32-bit to 64-bit timestamps.⁷ This is because a signed 32-bit integer runs out of room to store timestamps past the year 2038. A 64-bit integer gives us enough space to store timestamps until the year 292277026596.⁸ Suffice it to say, we won’t have to worry about that limit for quite some time.
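Here’s a sketch of what that overflow looks like: forcing a just-past-the-limit timestamp into a 32-bit signed integer wraps it around to the minimum value.

    import java.time.Instant;

    public class OverflowDemo {
        public static void main(String[] args) {
            long limit = Integer.MAX_VALUE; // 2,147,483,647 seconds
            System.out.println(Instant.ofEpochSecond(limit));
            // 2038-01-19T03:14:07Z, the last second a signed 32-bit timestamp can hold

            int truncated = (int) (limit + 1); // overflow: wraps to -2,147,483,648
            System.out.println(Instant.ofEpochSecond(truncated));
            // 1901-12-13T20:45:52Z, so the clock appears to jump back about 136 years
        }
    }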

For the same reason, it’s also a bit difficult to reference historical dates in Unix time. Signed integers dedicate a bit to act as a negative sign, allowing for negative Unix timestamps. Without signed integers, we can’t reliably reference dates before 1970. With a signed 32-bit integer, we can only reach back to December 1901. With a signed 64-bit integer, this becomes a non-issue.
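As a small illustration, a date before 1970 simply comes out as a negative timestamp (the moon-landing time below is rounded to the minute):

    import java.time.Instant;

    public class HistoricalDemo {
        public static void main(String[] args) {
            // Dates before 1970 are negative timestamps, which require a signed type
            Instant moonLanding = Instant.parse("1969-07-20T20:17:00Z");
            System.out.println(moonLanding.getEpochSecond()); // -14182980
        }
    }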

The final issue is that some organization needs to maintain and update the mapping from Unix timestamps to human-readable time, one that accommodates all of the oddities throughout history. In practice this means a timezone database such as the IANA Time Zone Database; there are various choices to select from, but the one set by your device’s manufacturer should be more than sufficient.
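As a sketch of what that database buys you, Java’s zone rules (built from the IANA data) already know that Samoa skipped December 30, 2011; the exact offsets printed depend on the DST rules in your copy of the data.

    import java.time.Instant;
    import java.time.ZoneId;

    public class TimezoneDatabaseDemo {
        public static void main(String[] args) {
            // One minute of absolute time, straddling Samoa's jump across the date line
            ZoneId apia = ZoneId.of("Pacific/Apia");
            Instant before = Instant.parse("2011-12-30T09:59:00Z");
            Instant after  = before.plusSeconds(60);

            System.out.println(before.atZone(apia)); // late on December 29, 2011 local time
            System.out.println(after.atZone(apia));  // the very start of December 31, 2011 local time
        }
    }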

Conclusion

What never fails to amaze me is how much time and effort go into the most foundational parts of modern computers. It’s something we take for granted every day. Imagine having to manually set the time on a smartphone or Apple Watch due to a time change. Any modern device without the capability to tell time or automatically switch timezones would be ridiculed, but it took a long journey to create the system of time computers use today.


Notes and Citations

¹ I’m speaking from a foundational standpoint. There are examples of computers accommodating this way of speaking. One of my favorite examples is asking Siri to set a reminder for “tomorrow” just after midnight. To clarify, Siri will ask whether you meant later that morning or the following day.

² Can you tell that I might be speaking from experience?

³ Java’s representation of date and time has changed drastically over the years, but the two main examples are the Date and Calendar objects, both of which use 0 for January. My point is that there’s enough ambiguity that you would need to look up official documentation to be confident.

⁴ Leap seconds are added to account for small inconsistencies in the Earth’s rotation. As a result, they are added at the discretion of the International Earth Rotation and Reference Systems Service rather than on a regular schedule.

⁵ Google. (n.d.). Leap Smear. Google Developers. Retrieved November 5, 2022, from https://developers.google.com/time/smear

⁶ These are just oddities that I’ve noticed or learned over the years. For more interesting oddities, check out Infinite Undo’s “Falsehoods Programmers Believe About Time” and Computerphile’s “The Problem with Time & Timezones”.

⁷ The transition from 32-bit to 64-bit is not a single point in history, but rather a long gradient of change. Not all software projects will be transitioned to 64-bit, mostly smaller, abandoned ones. Some modern operating systems have dropped support for 32-bit applications entirely, such as macOS Catalina and Android 12.

⁸ To see more overflows (like 128-bit and beyond), check out Centrinia’s Unix Time Overflow Gist.
