Unix time is not unambiguous. It is linked to UTC, so occasionally there are leap seconds, and there isn't a standard way of handling them (two seconds == one second, or skip a second)[1]. TAI is a better format, but it turns out the common TAI libraries just hard-code the TAI offset from Unix time.
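To make concrete what that hard-coding tends to look like, here's a rough sketch in Python (not any particular library's code; the function name is made up). TAI has been exactly 37 seconds ahead of UTC since 2017-01-01, so the "conversion" is just adding a constant:

    # Sketch of the naive Unix-to-TAI conversion such libraries effectively do.
    # TAI - UTC has been a constant 37 seconds since 2017-01-01, so anything
    # earlier (or after the next leap second) would need a real leap-second table.
    TAI_MINUS_UTC = 37

    def unix_to_tai_naive(unix_ts: int) -> int:
        return unix_ts + TAI_MINUS_UTC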
For nearly all web-based apps (i.e., most REST API clients), missing a single second (either permanently, or for a period until NTP brings the system clock back in sync) is so trivial as to be meaningless. Even things like action games or TOTP won't care about a single second.
The only place where I can envision this being a potential problem might be with stock trading, but HFT or other low-latency trading wouldn't be done through a REST API or web browser anyway.
Meanwhile, with UNIX time, you gain a mostly unambiguous, cross-platform way of dealing with dates/times with just an integer, easy math and comparisons (such as: this integer timestamp is > another, so it occurred after), and zero parsing or storage of text strings. That is a huge win in practical API design, especially when dealing with thousands or millions of events, and using an integer datatype in a database instead of a text field is a significant win, too, in terms of storage and indexing.
With that said, standards are nice, especially when dealing with an API that might talk to many different types of clients, but a text-based format like ISO-8601 probably isn't an optimal solution for most use cases.
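As a rough illustration of the "just an integer" point (hypothetical event data, nothing specific to any API):

    import time

    # Hypothetical events keyed by integer Unix timestamps: ordering, "newer than",
    # and age checks are plain integer math, with no ISO-8601 parsing involved.
    events = [
        {"id": "a", "ts": 1700003600},
        {"id": "b", "ts": 1700000000},
    ]

    events.sort(key=lambda e: e["ts"])            # chronological order
    newest = events[-1]                           # largest timestamp == most recent
    age_seconds = int(time.time()) - newest["ts"]
    print(newest["id"], age_seconds)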
No. Like I said, that would make way too much sense.
The hardware RTC on your motherboard is a stopwatch. When a leap second occurs, your system time will be set 1 second backwards which is also 1 second less than what the RTC reports. Then, the new fuckered time will be written to the RTC so it stops acting as a stopwatch and instead tracks unix time.
An application that reads the system unix time 4 times a second (using a monotonic clock for the 250ms delay) and prints it will observe a negative duration in unix time as seen in my previous comment.
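Something along these lines, as a sketch of the loop I mean (Python; across a leap-second step the printed delta comes out negative):

    import time

    # Poll the system (Unix) clock four times a second, pacing the loop with
    # sleep(), which on modern systems is driven by a monotonic clock. When the
    # kernel steps the clock back for a leap second, the printed delta goes negative.
    prev = time.time()
    while True:
        time.sleep(0.25)
        now = time.time()
        print(f"unix={now:.3f}  delta={now - prev:+.3f}")
        prev = now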
> to express it in UTC you need to factor in UTC's quirks such as its leap seconds
Oh, that would make unix time so much more useful. But no, the UTC quirk is incorporated into unix time the instant a leap second happens, as if some nutjob logged into all your servers and changed each clock manually.
I'm not really following. How does the fact that Unix timestamps are in terms of UTC affect whether leap seconds get incorporated into the timestamp or not? Do leap seconds work differently in UTC or something?
> There are not always 86400 seconds in 1 day. Midnight is not a whole number of multiples of 86400 after Jan 1st 1970.
Isn't that an issue with the format you want to convert UNIX time to?
I mean, UNIX time is quite clearly and unambiguously defined as the number of seconds that have passed since January 1st 1970. How are UTC's leap seconds relevant at all?
Specifically, the unix timestamp is (number of days since 1970-01-01 × 86400) + (seconds since midnight). The difference arises from the fact that in UTC some days are 86401 seconds long.
Another way of viewing it is that the unix timestamp is the seconds elapsed since 1970-01-01 minus the number of leap seconds that have passed.
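In code form, roughly (just a sketch restating the definition; the function names are made up):

    # Two ways of stating the same definition:
    #   unix_ts = days since 1970-01-01 * 86400 + seconds since midnight (UTC fields)
    #   unix_ts = real elapsed seconds since 1970-01-01 - leap seconds inserted so far
    def unix_from_utc_fields(days_since_epoch: int, seconds_since_midnight: int) -> int:
        return days_since_epoch * 86400 + seconds_since_midnight

    def unix_from_elapsed(elapsed_seconds: int, leap_seconds_so_far: int) -> int:
        return elapsed_seconds - leap_seconds_so_far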
> Do leap seconds work differently in UTC or something?
Leap seconds exist in UTC, that's the entire thing.
A leap second getting inserted (which has always been the case so far, though it may eventually change) means a UTC day ends on 23:59:60, which as far as UTC is concerned is perfectly fine, nothing wrong with a minute having 61 seconds.
UNIX time though? UNIX time is (86400 * days since epoch) + seconds since midnight.
23:59:60 is 86400, so on the last second of day X you're at 86400 * X + 86400, and on the next second (the first of day X+1) you're at 86400 * (X+1) + 0.
Which is the same value. And thus one second repeats.
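Plugging in a concrete day number, just to make the collision obvious (X is arbitrary here):

    # The leap second 23:59:60 of day X and 00:00:00 of day X+1 collapse to one value.
    X = 19000                                # arbitrary day number, just for the demo
    leap_second   = 86400 * X + 86400        # 23:59:60 on day X
    next_midnight = 86400 * (X + 1) + 0      # 00:00:00 on day X+1
    assert leap_second == next_midnight
    print(leap_second, next_midnight)        # the same number, twice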
> If a leap second is to be inserted, then in most Unix-like systems the OS kernel just steps the time back by 1 second at the beginning of the leap second, so the last second of the UTC day is repeated and thus duplicate timestamps can occur.
So a given unix timestamp can represent two different points in time: the last normal second of the day, or the leap second that happened after it. This makes (certain) unix timestamps ambiguous.
UNIX time is not actually a stopwatch: it's defined as 86400 seconds for each day elapsed since the epoch, plus the number of seconds elapsed since the last midnight (in essence it mirrors UTC's clock readings since 1970 rather than counting real elapsed seconds).
Thus UNIX time is by now at least 27 seconds behind a true count of elapsed seconds (there have been 27 leap seconds since 1972).
Because of leap seconds, conversions involving Unix time are also only valid for the leap seconds the program knows about: in effect, only for dates in the past, before the program (or system) was last updated to take new leap seconds into account.
If you need to translate between Unix time and UTC (or something Earth-rotation based like UT1), you'll be at most one second off. The difference due to leap seconds only accumulates if you need to translate something civil or astronomical (UTC, UT1, Unix time) to TAI.
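A sketch of why the leap-second table and its freshness matter for the TAI direction (the two offsets are the real ones for 2015 and 2017; the expiry handling and cutoff date are invented for the example):

    # Sketch: Unix -> TAI needs a leap-second table and is only trustworthy up to the
    # table's expiry. The two offsets below are real (TAI - UTC became 36 s on
    # 2015-07-01 and 37 s on 2017-01-01); the expiry cutoff is made up.
    LEAP_TABLE = [                 # (unix timestamp when offset takes effect, TAI - UTC)
        (1435708800, 36),          # 2015-07-01
        (1483228800, 37),          # 2017-01-01
    ]
    TABLE_EXPIRES = 1735689600     # 2025-01-01, an invented "table may be stale" cutoff

    def unix_to_tai(unix_ts: int) -> int:
        if unix_ts >= TABLE_EXPIRES:
            raise ValueError("leap-second table may be stale for this timestamp")
        offset = max((off for start, off in LEAP_TABLE if unix_ts >= start), default=35)
        return unix_ts + offset    # default of 35 is only right back to 2012-07-01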
also, unix time just describes the time elapsed since the start of measurement. it does not describe the organisationally/societally defined time which i currently "experience": leap years, timezones, summer time or not, the new global dictator who will emerge in 500 years and define time as beginning with his birth
[1] https://geometrian.com/programming/reference/timestds/index....