Unix Timestamps: The Developer's Complete Guide
21 February 2026 · Backend
A Unix timestamp is one of the most fundamental concepts in software development, yet it hides several non-obvious pitfalls that cause real bugs in production. This guide covers everything - from what a timestamp actually is, to the 2038 overflow problem, to the subtle ways different languages handle negative values and leap seconds.
What Is a Unix Timestamp?
A Unix timestamp is the number of seconds elapsed since 1970-01-01 00:00:00 UTC, also called the Unix epoch. It is a single integer - no timezone, no calendar formatting, no ambiguity. At the time of writing, the current timestamp is a little over 1.7 billion.
The format is universal across operating systems, programming languages, databases, and network protocols. A timestamp stored in one system can be interpreted correctly in any other system without conversion tables or locale settings. That is its entire value proposition: a single unambiguous integer that represents a moment in time.
You can convert this timestamp to a human-readable date or get the current timestamp using the tool on this site.
Why January 1, 1970?
The Unix epoch was chosen pragmatically, not symbolically. Unix was developed at Bell Labs in the late 1960s and early 1970s. When the engineers needed a reference point for time, they picked a date that was recent and round enough to be convenient for 32-bit arithmetic.
The original Unix systems used a 32-bit signed integer to store timestamps, which gives a range of roughly -2.1 billion to +2.1 billion seconds from the epoch - a span of about 136 years total, centred on 1970. Using a recent reference date meant that common timestamps of that era would be small, positive numbers fitting comfortably in a 32-bit register.
The specific date of January 1, 1970 was also influenced by earlier time systems at Bell Labs that counted days or seconds from similar reference points. It was partly convention, partly convenience, and entirely arbitrary from a mathematical standpoint.
Seconds vs Milliseconds vs Microseconds vs Nanoseconds
Different systems and languages return timestamps at different precisions. Mixing them up is one of the most common and hardest-to-debug timestamp bugs.
| Precision | Digits (current era) | Example value | Common in |
|---|---|---|---|
| Seconds | 10 digits | 1740000000 | Unix `time()`, PHP `time()`, most databases |
| Milliseconds | 13 digits | 1740000000000 | JavaScript `Date.now()`, Java `System.currentTimeMillis()` |
| Microseconds | 16 digits | 1740000000000000 | Python `time.time_ns() // 1000`, PostgreSQL |
| Nanoseconds | 19 digits | 1740000000000000000 | Go `time.Now().UnixNano()`, Java `Instant.now()` |
Quick identification rule: count the digits. A 10-digit number is seconds. A 13-digit number is milliseconds. Off by a factor of 1000 means off by one precision level.
The most common mistake is feeding a JavaScript millisecond timestamp into a function that expects seconds, or vice versa. The result is a date tens of thousands of years in the future, or one within weeks of January 1970 - both obvious in hindsight but painful when they reach production.
```php
// PHP: time() returns seconds, microtime(true) returns a float with microseconds
$seconds = time();               // 1740000000
$microseconds = microtime(true); // 1740000000.123456
```

```javascript
// JavaScript: Date.now() returns milliseconds
const ms = Date.now(); // 1740000000000
// Converting a JS timestamp for a PHP or SQL context that expects seconds:
const seconds = Math.floor(Date.now() / 1000);
```
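The digit-count rule can be turned into a small defensive guard at system boundaries. A minimal sketch in Python - the function name `normalize_to_seconds` is ours, not a standard API, and the heuristic only holds for current-era dates (roughly 2001 to 2286, where second-precision values have 10 digits):

```python
def normalize_to_seconds(ts: int) -> int:
    """Guess the precision of a current-era timestamp by digit count
    and normalize it to whole seconds. Heuristic only: assumes the
    value represents a date where seconds need exactly 10 digits."""
    digits = len(str(abs(ts)))
    if digits <= 10:                 # seconds
        return ts
    if digits <= 13:                 # milliseconds
        return ts // 1_000
    if digits <= 16:                 # microseconds
        return ts // 1_000_000
    return ts // 1_000_000_000       # nanoseconds

# All four precisions of the same instant normalize to one value:
assert normalize_to_seconds(1740000000) == 1740000000
assert normalize_to_seconds(1740000000000) == 1740000000
assert normalize_to_seconds(1740000000123456) == 1740000000
assert normalize_to_seconds(1740000000123456789) == 1740000000
```

A guard like this is best applied once, at the edge where external data enters the system, so internal code can assume a single precision.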
The Timezone Problem
Unix timestamps are always UTC. This is not a convention - it is the definition. A timestamp has no timezone. What varies is how you display or interpret it.
The most common mistake: a developer reads a timestamp from a database, converts it to a DateTime object, and formats it without specifying a timezone. The language or framework silently uses the server's local timezone. The result is a time that is correct to the second but displayed in the wrong timezone - often off by a fixed number of hours, making it look like a rounding or logic error.
Daylight saving time (DST) does not affect Unix timestamps. A timestamp always counts seconds since the epoch regardless of whether DST is active. The timestamp for 2024-03-10 02:30:00 America/New_York does not exist as a local time (the clock skips from 2:00 to 3:00 that day), but a corresponding UTC instant still exists and has a valid timestamp.
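This can be seen directly; a sketch using Python's standard `zoneinfo` module (3.9+), showing that the nonexistent local time still resolves to a real timestamp:

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # Python 3.9+; needs tzdata on some platforms

ny = ZoneInfo("America/New_York")
# 2024-03-10 02:30 does not exist on a New York wall clock (clocks jump
# from 02:00 straight to 03:00), but Python resolves it to a real UTC
# instant instead of raising an error.
dt = datetime(2024, 3, 10, 2, 30, tzinfo=ny)
ts = dt.timestamp()
# Converting the timestamp back shows the normalized local time:
print(datetime.fromtimestamp(ts, tz=ny))  # 2024-03-10 03:30:00-04:00
```

The timestamp itself is unambiguous; only the wall-clock interpretation had a gap.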
```php
// PHP: note that the '@' syntax always yields UTC and ignores any timezone argument
$ts = 1740000000;
$dt = new DateTimeImmutable('@' . $ts); // timezone is +00:00
// Bad: date() formats using the default timezone - often the server's local zone
echo date('Y-m-d H:i:s', $ts); // result depends on date_default_timezone_get()
// Good: format from the UTC-based object and label the zone
echo $dt->format('Y-m-d H:i:s \U\T\C'); // 2025-02-19 21:20:00 UTC
// Converting to a user's timezone
$userTz = new DateTimeZone('America/New_York');
$local = $dt->setTimezone($userTz);
echo $local->format('Y-m-d H:i:s T'); // 2025-02-19 16:20:00 EST
```
```python
# Python: use timezone-aware datetime objects
from datetime import datetime, timezone, timedelta

ts = 1740000000
# Bad: datetime.fromtimestamp() uses the local system timezone
dt_local = datetime.fromtimestamp(ts)
# Good: always attach UTC explicitly
dt_utc = datetime.fromtimestamp(ts, tz=timezone.utc)
print(dt_utc.isoformat())  # 2025-02-19T21:20:00+00:00
# Converting to another timezone
eastern = timezone(timedelta(hours=-5))
dt_eastern = dt_utc.astimezone(eastern)
```
```javascript
// JavaScript: the Date object is always UTC internally
const ts = 1740000000;
const date = new Date(ts * 1000); // JS expects milliseconds
// UTC output
console.log(date.toISOString()); // "2025-02-19T21:20:00.000Z"
// Local time (browser timezone - avoid in backend logic)
console.log(date.toLocaleString());
```
The Year 2038 Problem
A 32-bit signed integer can hold values from -2,147,483,648 to 2,147,483,647. The maximum value of 2,147,483,647 seconds after the Unix epoch is 2038-01-19 03:14:07 UTC. After that second, a 32-bit signed counter overflows to -2,147,483,648, which corresponds to 1901-12-13 20:45:52 UTC. This is the Year 2038 problem, also called the Y2K38 problem.
Which systems are still at risk:
- Embedded systems and IoT devices with 32-bit processors running old firmware
- Legacy C code using the `time_t` type on 32-bit platforms (on 64-bit Linux, `time_t` is already 64 bits)
- Old MySQL databases where `TIMESTAMP` columns are stored as 32-bit integers (such columns store dates only up to 2038-01-19 03:14:07)
- Certain older operating systems on 32-bit hardware
The fix is straightforward: use 64-bit integers. A 64-bit signed timestamp extends the valid range to approximately the year 292,277,026,596. Modern 64-bit operating systems, current PHP, Python, and JavaScript runtimes, and recent database versions all handle 64-bit timestamps correctly.
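The wraparound can be simulated by truncating the counter to 32 bits. A sketch in Python (real systems overflow inside C code, not like this):

```python
import struct
from datetime import datetime, timezone

MAX32 = 2**31 - 1  # 2147483647 -> 2038-01-19 03:14:07 UTC

# One second past the maximum, reinterpreted as a signed 32-bit value,
# wraps to the minimum representable timestamp.
wrapped = struct.unpack("<i", struct.pack("<I", (MAX32 + 1) & 0xFFFFFFFF))[0]
print(wrapped)  # -2147483648

# Negative timestamps may raise OSError on some platforms (e.g. Windows),
# but on Linux this resolves to the post-overflow date:
print(datetime.fromtimestamp(wrapped, tz=timezone.utc))  # 1901-12-13 20:45:52+00:00
```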
```sql
-- MySQL: use DATETIME instead of TIMESTAMP for post-2038 dates
-- TIMESTAMP: stored as a 32-bit integer, max 2038-01-19 03:14:07
-- DATETIME: stored differently, range up to 9999-12-31
CREATE TABLE events (
    id BIGINT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    occurred_at DATETIME(6) NOT NULL -- 6 = microsecond precision
);

-- PostgreSQL: TIMESTAMPTZ is already 64-bit, no issue
CREATE TABLE events (
    id BIGSERIAL PRIMARY KEY,
    occurred_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);
```
Negative Timestamps
Negative Unix timestamps represent dates before January 1, 1970. They are valid, well-defined values.
- `-1` = 1969-12-31 23:59:59 UTC
- `-86400` = 1969-12-31 00:00:00 UTC
- `-2208988800` = 1900-01-01 00:00:00 UTC
The gotcha: on 32-bit systems, negative timestamps have the same overflow boundary as positive ones. The minimum representable date on a 32-bit system is 1901-12-13 20:45:52 UTC. On a 64-bit system, the minimum is around 292 billion years in the past - effectively unlimited.
```php
// PHP handles negative timestamps correctly on 64-bit systems
$dt = new DateTimeImmutable('@-2208988800');
echo $dt->format('Y-m-d'); // 1900-01-01
```

```python
# Python (note: negative timestamps may raise OSError on Windows)
from datetime import datetime, timezone

dt = datetime.fromtimestamp(-2208988800, tz=timezone.utc)
print(dt.isoformat())  # 1900-01-01T00:00:00+00:00
```
Leap Seconds
The POSIX standard defines Unix time as if every day is exactly 86,400 seconds long. In reality, Earth's rotation is irregular, and leap seconds are occasionally added to UTC to keep it in sync with astronomical time. As of 2024, there have been 27 leap seconds added since 1972.
Unix timestamps ignore them. The second 1483228800 corresponds to both 2017-01-01 00:00:00 UTC (after the leap second) and to the leap second 2016-12-31 23:59:60 UTC itself - the same timestamp maps to two instants. This means:
- Unix timestamps cannot distinguish the leap second from the second immediately following it
- Duration calculations using timestamps can be off by up to 27 seconds for events spanning multiple leap seconds
- TAI (International Atomic Time) is ahead of UTC by 37 seconds as of 2024 and does not apply leap seconds - systems requiring nanosecond precision often use TAI
For most application development, this does not matter. Scheduling, logging, and event ordering at second-level precision are unaffected. It only matters for precision timing systems: financial settlement, GPS receivers, scientific instruments, and telecommunications synchronization.
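The collapse of the leap second into its neighbour can be demonstrated with the 2016-12-31 leap second; a short Python sketch:

```python
from datetime import datetime, timezone

# The last second of 2016 was followed by a leap second (23:59:60 UTC)
# before 2017 began - so 2 real seconds elapsed between these instants.
before = datetime(2016, 12, 31, 23, 59, 59, tzinfo=timezone.utc).timestamp()
after = datetime(2017, 1, 1, 0, 0, 0, tzinfo=timezone.utc).timestamp()

# Unix time reports only 1 second: the leap second is invisible.
print(after - before)  # 1.0
```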
Code Examples
PHP
```php
<?php
declare(strict_types=1);

use Carbon\Carbon;

// Current timestamp in seconds
$ts = time(); // e.g. 1740000000

// With microseconds
$micro = microtime(true); // e.g. 1740000000.123456

// Create a DateTimeImmutable from a timestamp
$dt = new DateTimeImmutable('@' . $ts);
echo $dt->format(DateTimeInterface::ATOM); // e.g. 2025-02-19T21:20:00+00:00

// Convert back to timestamp
$ts2 = $dt->getTimestamp(); // 1740000000

// With Carbon (nesbot/carbon)
$carbon = Carbon::createFromTimestamp($ts, 'UTC');
echo $carbon->toIso8601String(); // e.g. 2025-02-19T21:20:00+00:00
echo $carbon->diffForHumans();   // human-readable difference from now

// Get a timestamp for a specific date
$future = new DateTimeImmutable('2038-01-19 03:14:06', new DateTimeZone('UTC'));
echo $future->getTimestamp(); // 2147483646
```
Python
```python
import time
from datetime import datetime, timezone, timedelta

# Current time in seconds (float)
ts = time.time() # e.g. 1740000000.123456

# Integer seconds
ts_int = int(time.time()) # e.g. 1740000000

# Nanoseconds (Python 3.7+)
ts_ns = time.time_ns() # e.g. 1740000000123456789

# Convert timestamp to timezone-aware datetime
dt = datetime.fromtimestamp(1740000000, tz=timezone.utc)
print(dt.isoformat()) # 2025-02-19T21:20:00+00:00

# Convert datetime to timestamp
ts = dt.timestamp() # 1740000000.0

# Timezone-aware datetime for a specific zone
eastern = timezone(timedelta(hours=-5))
dt_eastern = dt.astimezone(eastern)
print(dt_eastern.isoformat()) # 2025-02-19T16:20:00-05:00
```
JavaScript
```javascript
// Current timestamp in milliseconds
const ms = Date.now(); // e.g. 1740000000000

// Convert to seconds for a Unix timestamp
const seconds = Math.floor(Date.now() / 1000); // e.g. 1740000000

// From timestamp to Date object
const date = new Date(1740000000 * 1000);
console.log(date.toISOString());  // "2025-02-19T21:20:00.000Z"
console.log(date.getFullYear());  // 2025

// From Date object to timestamp (milliseconds)
const ts = date.getTime(); // 1740000000000

// From Date object to Unix seconds
const unix = Math.floor(date.getTime() / 1000); // 1740000000
```
SQL
```sql
-- MySQL: current timestamp and conversions
-- (UNIX_TIMESTAMP and FROM_UNIXTIME use the session time zone;
--  the results below assume the session time zone is UTC)
SELECT UNIX_TIMESTAMP();                          -- e.g. 1740000000
SELECT UNIX_TIMESTAMP('2025-02-19 21:20:00');     -- 1740000000
SELECT FROM_UNIXTIME(1740000000);                 -- 2025-02-19 21:20:00
SELECT FROM_UNIXTIME(1740000000, '%Y-%m-%d');     -- 2025-02-19

-- PostgreSQL: extract epoch from timestamp
SELECT EXTRACT(EPOCH FROM NOW());                 -- e.g. 1740000000.123456
SELECT EXTRACT(EPOCH FROM TIMESTAMPTZ '2025-02-19 21:20:00 UTC'); -- 1740000000
SELECT TO_TIMESTAMP(1740000000);                  -- 2025-02-19 21:20:00+00
```
Comparison: Unix Timestamp vs ISO 8601 vs RFC 2822
| Feature | Unix Timestamp | ISO 8601 | RFC 2822 (email) |
|---|---|---|---|
| Example | 1740000000 | 2025-02-19T21:20:00Z | Wed, 19 Feb 2025 21:20:00 +0000 |
| Timezone info | Always UTC (implicit) | Explicit (Z or +HH:MM) | Explicit offset |
| Human-readable | No | Yes | Yes |
| Sortable as string | Yes (numeric, same digit count) | Yes | No |
| Precision | Seconds (or ms/us/ns variants) | Variable (seconds to nanoseconds) | Seconds only |
| Length | 10 characters | 20-35 characters | 31+ characters |
| DST-safe | Yes | Yes (with UTC/offset) | Yes (with offset) |
| Best for | Storage, arithmetic, APIs | Data interchange, logs | Email headers, HTTP |
| Parse complexity | Trivial (integer) | Moderate | High |
For API design: Unix timestamps are ideal for machine-to-machine communication where you control both sides. ISO 8601 is better when the timestamp will be read by humans or when timezone context matters in the response.
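One common compromise is to return both forms. A sketch in Python - the helper name `to_api_fields` and the response field names are illustrative, not a standard:

```python
from datetime import datetime, timezone

def to_api_fields(ts: int) -> dict:
    """Illustrative helper: expose a Unix timestamp for machine
    consumers alongside an ISO 8601 string for human readers."""
    dt = datetime.fromtimestamp(ts, tz=timezone.utc)
    return {"timestamp": ts, "iso8601": dt.strftime("%Y-%m-%dT%H:%M:%SZ")}

print(to_api_fields(2147483647))
# {'timestamp': 2147483647, 'iso8601': '2038-01-19T03:14:07Z'}
```

The cost is a few extra bytes per record; the benefit is that logs and API responses stay debuggable without a converter at hand.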
Quick Reference: Timestamp Ranges
| Timestamp | Date (UTC) |
|---|---|
| 0 | 1970-01-01 00:00:00 |
| 1000000000 | 2001-09-09 01:46:40 |
| 1500000000 | 2017-07-14 02:40:00 |
| 1700000000 | 2023-11-14 22:13:20 |
| 1740000000 | 2025-02-19 21:20:00 |
| 2000000000 | 2033-05-18 03:33:20 |
| 2147483647 | 2038-01-19 03:14:07 (32-bit max) |
| 4000000000 | 2096-10-02 07:06:40 |
To check any value interactively, use the timestamp converter - paste a timestamp to decode it, or enter a date to get its timestamp.
Conclusion
Unix timestamps are elegant in their simplicity: one integer, always UTC, no ambiguity. The problems come from the edges - precision mismatches between systems, implicit timezone assumptions, and the looming 2038 boundary on 32-bit infrastructure.
The rules to follow:
- Always specify timezone explicitly when converting a timestamp to a human-readable format - never rely on system defaults.
- Know whether your runtime returns seconds or milliseconds before doing any arithmetic.
- Use 64-bit integers for timestamp storage in any database column or API field that will contain dates after 2038.
- For APIs, document the precision (seconds vs milliseconds) in your schema or contract.
- When in doubt, use ISO 8601 at the boundary with external systems - and Unix timestamps internally for arithmetic.
For quick conversions, the timestamp converter handles seconds, milliseconds, and bidirectional conversion between timestamps and human-readable dates.