FreeBSD Support for Leap Seconds (freebsd.org)
58 points by tachion on June 27, 2015 | hide | past | favorite | 33 comments


Take this as your reminder to participate in lobbying against leap seconds. They serve no useful purpose: the drift between civil noon and solar noon that they prevent would take thousands of years to reach even one hour. The thing leap seconds track, "mean solar time", is not something that physically exists; it's a mathematical abstraction, and actual solar time differs substantially from mean solar time depending on your location and the time of year. Applications that care about solar time are usually frustrated by leap seconds in any case, since they produce unpredictable discontinuities in the difference between solar time and civil time, and most systems provide no way to report the history of leap seconds to applications.

If people still care about solar noon matching civil noon 4000 years from now, they can simply shift the timezones by an hour. That is a much safer and more reasonable change, which could be planned a hundred years in advance and wouldn't require a constant feed of unpredictable "trusted" leap seconds. Systems already handle mixed timezones well and will continue to do so, since computers exist worldwide.

As more distributed systems become sensitive to second-scale timing, and as the cost of reliable oscillators that can keep time autonomously continues to decline, the cost of leap seconds will keep increasing; meanwhile the value they provide (primarily political bragging rights about Greenwich being part of the definition of civil time) will remain practically zero for almost everyone.

Workarounds like leap-smear are just that: workarounds. They add more possible failure modes to leap-second handling, and they will cause different failure modes for applications that care about time in different ways, e.g. caring about frequency vs. absolute time, or simply because there are now even more ways to handle the leap second.

Moreover, discontinuing leap seconds is simple and nearly costless: announce that no more will be issued, then don't issue any more of them.


In my opinion, leap seconds shouldn't exist in software. Time should be continuous from some fixed point (1970-01-01 works fine), and leap seconds should only be accounted for when software formats (or parses) a date-time.
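A toy sketch of that idea: the system keeps one continuous second count, and leap seconds are subtracted out only at the formatting boundary. The table values here are made up purely for illustration, not real leap-second data.

```python
# Hypothetical continuous-time instants at which a leap second was
# inserted (toy values, NOT real leap-second data).
LEAP_INSERTIONS = [100, 200]

def to_posix_style(continuous):
    """Map a continuous second count to a POSIX-style count by
    removing every leap second inserted at or before that instant."""
    return continuous - sum(1 for t in LEAP_INSERTIONS if t <= continuous)
```

All internal arithmetic, timers, and comparisons use the continuous count; only display code ever calls the conversion.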


That's what TAI is for: https://en.wikipedia.org/wiki/International_Atomic_Time

The problem is not everyone uses NTP, and not everyone has network connectivity. So we have to support folks inputting the time manually. Trying to go from that backwards to TAI (possibly with an incomplete/nonexistent table of leap seconds) makes for some rather messy logic.


Even more than that: some real-world infrastructure does depend on time being synced to the rotation of the Earth. I work in astronomy, and taking leap seconds into account is extremely important for precisely pointing telescopes.


I still think that this complexity should stay where it matters and be removed from the majority of software. There's a lot of broken code out there working (or not working) with timezones, and leap seconds are a much more subtle thing that most developers have probably never even heard of. Date/time infrastructure is complex enough, and any simplification helps.


Not just that, but NTP has no TAI support. So even if you'd like your own systems to run without leap seconds, and don't mind the offset with UTC, you're basically on your own: there is no nice setting you can flip.


It is possible to run GPS time-synchronized NTP servers (GPS time is a fixed offset from TAI). You must however make very sure your clients do not confuse your own NTP servers with any public UTC-based ones. http://www.ucolick.org/~sla/leapsecs/right+gps.html
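The conversions involved are simple fixed offsets. A small sketch (the 19 s GPS-to-TAI offset is fixed by definition; the leap-second total is 35 s just before the July 2015 leap second and must come from an up-to-date table such as IERS Bulletin C):

```python
# GPS time runs a constant 19 s behind TAI by definition of the
# GPS timescale; UTC trails TAI by the accumulated leap-second total.
GPS_TO_TAI = 19

def gps_to_tai(gps_seconds):
    return gps_seconds + GPS_TO_TAI

def tai_to_utc(tai_seconds, leap_total=35):
    # leap_total must be looked up in a current leap-second table
    return tai_seconds - leap_total
```

So a GPS-disciplined NTP server can serve a leap-second-free timescale, as long as no client mixes it with UTC-based upstream servers.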


I like that they clearly describe how FreeBSD handles leap seconds, but when I read the headline I was hoping they were announcing support for leap smearing.


I was hoping for the announcement "we are ditching the obsolete POSIX standard that specifies 1 day must consist of exactly 86400 seconds".


Apologies for the OT, but I read "leap smearing" and immediately thought "that should be an entry in a critical failure table for parkour skill rolls"


I'm just imagining the subtle bugs that can happen if the developer can't assume a minute is 60 seconds and that each minute is the same length.

This seems like a feature for no one, as the people who really care about leap seconds are probably doing their own timekeeping already, since computer clocks aren't exactly accurate to begin with.


This is not really a problem. Some thoughts:

1. Time is never the same on two different computers.

2. Time is different when you send it and receive it (speed of light).

3. Time isn't necessarily linear (time dilation).

4. Clocks drift, even with NTP.

5. Some systems are isolated enough to accept inaccurate human input and time synchronisation.

Time is an invention mainly for human consumption. Machines are better served by explicit synchronisation (Paxos, distributed transactions) or by none at all (eventual consistency).

Time, if we lose a few seconds here and there, meh, don't sweat it.


Machines are better served with explicit synchronisation (paxos, distributed transactions) or not at all (eventual consistency)

Using atomic clocks to sync distributed systems is actually done in practice.

http://static.googleusercontent.com/media/research.google.co...
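The core trick in that design is exposing clock *uncertainty* rather than a single value, then waiting out the uncertainty before acting on a timestamp. An illustrative sketch (this is my own toy code, not Google's API; the error bound is a made-up constant):

```python
import time

# Assumed worst-case clock error in seconds (hypothetical figure).
EPSILON = 0.007

def now_interval():
    """Return (earliest, latest) bounds on the true current time."""
    t = time.time()
    return (t - EPSILON, t + EPSILON)

def commit_wait(commit_ts):
    """Block until even the slowest plausible clock has passed
    commit_ts, so the timestamp is safely in the past everywhere."""
    while now_interval()[0] <= commit_ts:
        time.sleep(0.001)
```

The smaller the uncertainty (hence the atomic clocks), the shorter the wait, which is why the hardware investment pays off.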


Utterly insane IMHO. If there's a single error in any temporal data, what happens?

Even relatively reliable frequency/time standards (HP/Agilent come to mind) aren't 100% available.


If the leap second were really just expressed to the application as the 61st second of a minute, it wouldn't be so bad. But instead the second 23:59:59 happens _twice_ (UNIX time steps backwards one second), which tends to confuse applications that assume time increases monotonically.
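The collision falls straight out of the POSIX rule that every day is exactly 86400 s. Using the real June 2015 leap second as the example:

```python
import calendar

# calendar.timegm does pure days*86400 arithmetic with no leap-second
# table, so the inserted second and the next midnight collide.
leap  = calendar.timegm((2015, 6, 30, 23, 59, 60, 0, 0, 0))
after = calendar.timegm((2015, 7, 1, 0, 0, 0, 0, 0, 0))
# leap == after == 1435708800: two distinct UTC seconds, one POSIX value
```

The kernel has to resolve that collision somehow, and stepping the clock back is the traditional (and monotonicity-breaking) answer.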


Could someone explain to me why leap seconds can't simply be inserted on the leap day in February? Never mind, that just makes way too much sense, I assume :) Just like not having daylight saving time, or 30-minute-offset time zones, or months with either 31 or 30 days, mostly arranged in a random order.

My ideal time system would have 13 months: twelve 30-day months plus one end-of-year month with either 5 or 6 days (plus an occasional leap second). And no damn daylight saving time. Of course, it would make a lot of sense to put the beginning of the year at the beginning of the first month of spring, not the _second_ month of winter. But again, nothing about the modern calendar makes much sense. You begin to realize how insane it is only when you are forced to write a time library from scratch in C++ as part of a C++ course. I did that in high school, and it was really eye-opening.
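One appeal of that layout is how trivial the date arithmetic becomes. A toy converter for such a calendar (the numbering is my own assumption, not an established standard):

```python
def fixed_calendar(day_of_year):
    """Map a 1-based ordinal day to (month, day) in a calendar of
    twelve 30-day months plus a short 13th month of 5 or 6 days."""
    if day_of_year <= 360:
        return ((day_of_year - 1) // 30 + 1, (day_of_year - 1) % 30 + 1)
    return (13, day_of_year - 360)
```

No lookup table of month lengths, no special cases except the final stub month.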


Because leap seconds aren't inserted on a regular, predictable schedule. They're inserted irregularly, based on astronomical observations, to account for irregularities in the Earth's rotation.


That doesn't seem to be the case, at least for UTC. (Do scientists use a special calendar that reflects Leap Seconds more quickly?)

http://datacenter.iers.org/eop/-/somos/5Rgv/getTX/16/bulleti...

"Leap seconds can be introduced in UTC at the end of the months of December or June, depending on the evolution of UT1-TAI. Bulletin C is mailed every six months, either to announce a time step in UTC or to confirm that there will be no time step at the next possible date."

(That said, saving them up until the next Leap Year would mean a maximum discovery-to-implementation interval of 8 years, rather than the current 1 year max.)


I meant "irregularly" inasmuch as the presence or absence of a leap second doesn't follow a strict pattern -- it's not "regular" in the way that leap years are.


Sounds like you would have been a good fit for working at Kodak https://en.wikipedia.org/wiki/International_Fixed_Calendar


A good lesson on why it's almost always a bad idea to write your own time-handling code, or to represent time as an integer instead of a proper date/time datatype in whatever language or database you're using.


Why is repeating the :59 second better than adding a :60 second?


Because having tv_sec = 60 during leap seconds is a pretty crazy special case, and it's likely to trigger bugs in applications. There's no other situation where that'll ever happen. Having tv_sec = 59 happen twice is considerably less dangerous.
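You can see that breakage in ordinary library code: Python's `datetime` type, for one, cannot represent second 60 at all, which is the kind of failure `tv_sec = 60` would invite throughout the stack.

```python
from datetime import datetime

# datetime only allows seconds 0..59, so the leap second itself is
# unrepresentable and construction raises ValueError.
rejected = False
try:
    datetime(2015, 6, 30, 23, 59, 60)
except ValueError:
    rejected = True
```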


I have never understood why leap seconds are not phased in over a specific time period, rather than the entire adjustment being added at once as the current method does.


By making some seconds longer than others? Kind of the opposite of precision timekeeping. Changing the size of the unit of measure is a bad idea compared to changing the number of things measured.


Google shared some details about how they do leap smearing and the adjustments were smeared over a long enough time to be small enough to not affect their applications. They've been doing this for a while with no ill effects AFAIK. I'm aware of other people patching ntpd to do similar stuff as well.

If you can get the smear to be gentle enough not to affect your apps, it's a nice workaround for the general problem. There are just so many components running inside a modern infrastructure, and it's hard to get them all to behave correctly during a traditional 61-second-minute leap second transition.
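A minimal linear-smear sketch, assuming the extra second is spread evenly over a fixed window ending at the leap instant (the window length here is an assumption; published smears have used various widths):

```python
def smear_offset(t, leap_t, window=86400.0):
    """Fraction of the leap second already applied at time t, for a
    linear smear over `window` seconds ending at leap_t."""
    if t <= leap_t - window:
        return 0.0
    if t >= leap_t:
        return 1.0
    return (t - (leap_t - window)) / window
```

The smeared clock reports `true_time - smear_offset(...)`, so it never steps and never shows second 60; it just runs fractionally slow for the duration of the window.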

And that 2012 leap second Linux kernel bug was pretty epic. AFAIK, people who were smearing didn't experience it.

http://googleblog.blogspot.com/2011/09/time-technology-and-l...


Smearing is not suitable for all applications, however. For example, anything requiring sub-microsecond precision (e.g. data acquisition) would require at least a 12-day smear to not fail its timing requirements.

Such precision is well outside the realm of NTP but is not really a problem for PTP. I've worked on Linux systems that have done this; ~100 ns synchronization was not uncommon.

(PTP, incidentally, got it right and specifies that its timescale is based on TAI, so it is continuous.)
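The 12-day figure is easy to sanity-check: smearing one second over 12 days is a frequency offset just under one part per million, i.e. just under 1 µs of accumulated error per second of smearing.

```python
# Back-of-envelope check: 1 s smeared over 12 days.
SMEAR_SECONDS = 12 * 86400      # 1,036,800 s
rate = 1.0 / SMEAR_SECONDS      # fractional frequency offset, ~9.6e-7
```

Any shorter window pushes the rate above 1 ppm, which blows a sub-microsecond-per-second timing budget.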


chrony has support for leap second smearing too. About to turn it on at work to avoid crashing stuff in the lab.

The amount of stuff that still runs an older Linux kernel is insane...


What kind of lab?


Large ISP.


On real hardware, some seconds are already longer than others. Computer clocks are not accurate; they need to synchronize, and the clock skew plus the subsequent correction cause seconds to have varying duration.


That is exactly how it was done during the 1960s: the duration of the atomic second varied from year to year, and it proved utterly unworkable for everybody.


Some NTP servers are configured to do precisely that, in order to avoid the problem entirely in systems where leap seconds might be problematic.

For example, AWS does this for their management console: https://aws.amazon.com/blogs/aws/look-before-you-leap-the-co...

Google also does something similar: http://googleblog.blogspot.com/2011/09/time-technology-and-l...



